I0817 21:38:02.533872       7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0817 21:38:02.539722       7 e2e.go:109] Starting e2e run "390d212d-e9c9-47f2-91e3-5c34330eddb6" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597700269 - Will randomize all specs
Will run 278 of 4844 specs

Aug 17 21:38:03.073: INFO: >>> kubeConfig: /root/.kube/config
Aug 17 21:38:03.156: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 17 21:38:03.347: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 17 21:38:03.513: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 17 21:38:03.513: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 17 21:38:03.514: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 17 21:38:03.557: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 17 21:38:03.557: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 17 21:38:03.557: INFO: e2e test version: v1.17.11
Aug 17 21:38:03.563: INFO: kube-apiserver version: v1.17.5
Aug 17 21:38:03.566: INFO: >>> kubeConfig: /root/.kube/config
Aug 17 21:38:03.590: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:38:03.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Aug 17 21:38:03.726: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 21:38:03.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:38:11.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1916" for this suite.
• [SLOW TEST:8.329 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":21,"failed":0}
SSSS
------------------------------
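The spec above fetches container logs through a websocket connection to the API server's log subresource. The same stream can be inspected outside the suite; a minimal sketch (the pod name below is hypothetical, and kubectl negotiates the transport itself):

    # Hypothetical pod; any running container works.
    kubectl run ws-logger --restart=Never --image=busybox -- sh -c 'echo hello; sleep 300'
    kubectl logs ws-logger
    # Same data via the log subresource the test hits over websockets:
    kubectl get --raw "/api/v1/namespaces/default/pods/ws-logger/log"

------------------------------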
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:38:11.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-160836ba-1bbf-4b03-ac6b-c6890e6cb018
STEP: Creating a pod to test consume configMaps
Aug 17 21:38:12.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f4ae32a5-3a62-4375-b725-91d743a0238e" in namespace "projected-8727" to be "success or failure"
Aug 17 21:38:12.165: INFO: Pod "pod-projected-configmaps-f4ae32a5-3a62-4375-b725-91d743a0238e": Phase="Pending", Reason="", readiness=false. Elapsed: 70.527801ms
Aug 17 21:38:14.805: INFO: Pod "pod-projected-configmaps-f4ae32a5-3a62-4375-b725-91d743a0238e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711628415s
Aug 17 21:38:16.878: INFO: Pod "pod-projected-configmaps-f4ae32a5-3a62-4375-b725-91d743a0238e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.783695978s
Aug 17 21:38:18.886: INFO: Pod "pod-projected-configmaps-f4ae32a5-3a62-4375-b725-91d743a0238e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.792567887s
STEP: Saw pod success
Aug 17 21:38:18.887: INFO: Pod "pod-projected-configmaps-f4ae32a5-3a62-4375-b725-91d743a0238e" satisfied condition "success or failure"
Aug 17 21:38:18.893: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-f4ae32a5-3a62-4375-b725-91d743a0238e container projected-configmap-volume-test:
STEP: delete the pod
Aug 17 21:38:19.272: INFO: Waiting for pod pod-projected-configmaps-f4ae32a5-3a62-4375-b725-91d743a0238e to disappear
Aug 17 21:38:19.572: INFO: Pod pod-projected-configmaps-f4ae32a5-3a62-4375-b725-91d743a0238e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:38:19.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8727" for this suite.
• [SLOW TEST:7.666 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":25,"failed":0}
SS
------------------------------
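For context, the spec above mounts one configMap into the same pod twice through projected volumes. A minimal sketch with hypothetical names:

    kubectl create configmap demo-cm --from-literal=key=value
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "cat /etc/projected-1/key /etc/projected-2/key"]
        volumeMounts:
        - name: vol-1
          mountPath: /etc/projected-1
        - name: vol-2
          mountPath: /etc/projected-2
      volumes:
      - name: vol-1
        projected:
          sources:
          - configMap:
              name: demo-cm
      - name: vol-2
        projected:
          sources:
          - configMap:
              name: demo-cm
    EOF

------------------------------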
[sig-apps] Job
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:38:19.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 17 21:38:26.991: INFO: Successfully updated pod "adopt-release-4rd4p"
STEP: Checking that the Job readopts the Pod
Aug 17 21:38:26.992: INFO: Waiting up to 15m0s for pod "adopt-release-4rd4p" in namespace "job-2725" to be "adopted"
Aug 17 21:38:27.007: INFO: Pod "adopt-release-4rd4p": Phase="Running", Reason="", readiness=true. Elapsed: 15.324972ms
Aug 17 21:38:27.008: INFO: Pod "adopt-release-4rd4p" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 17 21:38:27.525: INFO: Successfully updated pod "adopt-release-4rd4p"
STEP: Checking that the Job releases the Pod
Aug 17 21:38:27.526: INFO: Waiting up to 15m0s for pod "adopt-release-4rd4p" in namespace "job-2725" to be "released"
Aug 17 21:38:27.848: INFO: Pod "adopt-release-4rd4p": Phase="Running", Reason="", readiness=true. Elapsed: 322.564146ms
Aug 17 21:38:27.849: INFO: Pod "adopt-release-4rd4p" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:38:27.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2725" for this suite.
• [SLOW TEST:8.471 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":3,"skipped":27,"failed":0}
SSSSSSSSSSSSSS
------------------------------
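Adoption and release in the spec above happen through labels and ownerReferences: the Job controller adopts a pod whose labels match its selector and drops it when they stop matching. A rough sketch against the objects named in the record (label keys are what Job pods normally carry; treat them as assumptions):

    # Adoption is visible in ownerReferences:
    kubectl -n job-2725 get pod adopt-release-4rd4p \
      -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
    # Removing the selector labels would make the controller release the pod again:
    kubectl -n job-2725 label pod adopt-release-4rd4p job-name- controller-uid-

------------------------------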
[sig-api-machinery] Aggregator
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:38:28.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 17 21:38:28.829: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Aug 17 21:38:32.291: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 17 21:38:35.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733297112, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733297112, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733297112, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733297111, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 21:38:38.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733297112, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733297112, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733297112, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733297111, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 21:38:41.045: INFO: Waited 921.034116ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:38:42.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3833" for this suite.
• [SLOW TEST:14.709 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":4,"skipped":41,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
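"Registering the sample API server" above means creating an APIService object that tells the aggregator to route a group/version to an in-cluster Service. A sketch only, with hypothetical group, names, and an elided CA bundle:

    kubectl get apiservices
    kubectl apply -f - <<'EOF'
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      name: v1alpha1.wardle.example.com   # hypothetical group/version
    spec:
      group: wardle.example.com
      version: v1alpha1
      groupPriorityMinimum: 2000
      versionPriority: 200
      service:
        name: sample-api                  # Service fronting the extension server
        namespace: kube-system
      caBundle: "<base64-encoded CA>"     # placeholder; must be filled in
    EOF

------------------------------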
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:38:42.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Aug 17 21:38:42.965: INFO: Waiting up to 5m0s for pod "var-expansion-5f621786-2081-4688-afab-a71399649974" in namespace "var-expansion-3058" to be "success or failure"
Aug 17 21:38:42.993: INFO: Pod "var-expansion-5f621786-2081-4688-afab-a71399649974": Phase="Pending", Reason="", readiness=false. Elapsed: 27.151474ms
Aug 17 21:38:45.033: INFO: Pod "var-expansion-5f621786-2081-4688-afab-a71399649974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067496392s
Aug 17 21:38:47.201: INFO: Pod "var-expansion-5f621786-2081-4688-afab-a71399649974": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235554597s
Aug 17 21:38:49.212: INFO: Pod "var-expansion-5f621786-2081-4688-afab-a71399649974": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.246858322s
STEP: Saw pod success
Aug 17 21:38:49.213: INFO: Pod "var-expansion-5f621786-2081-4688-afab-a71399649974" satisfied condition "success or failure"
Aug 17 21:38:49.345: INFO: Trying to get logs from node jerma-worker pod var-expansion-5f621786-2081-4688-afab-a71399649974 container dapi-container:
STEP: delete the pod
Aug 17 21:38:50.266: INFO: Waiting for pod var-expansion-5f621786-2081-4688-afab-a71399649974 to disappear
Aug 17 21:38:50.296: INFO: Pod var-expansion-5f621786-2081-4688-afab-a71399649974 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:38:50.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3058" for this suite.
• [SLOW TEST:7.987 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":66,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
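"Env composition" above refers to `$(VAR)` references in env values, which the kubelet expands from earlier entries in the same list. A minimal sketch with hypothetical names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: env-compose-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo $COMPOSED"]
        env:
        - name: FIRST
          value: "foo"
        - name: COMPOSED
          value: "$(FIRST)-bar"   # $(VAR) expanded from the earlier entry
    EOF
    kubectl logs env-compose-demo   # prints "foo-bar" once the pod completes

------------------------------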
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:38:50.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:38:52.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1209" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":82,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:38:52.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 17 21:38:54.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Aug 17 21:38:55.633: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T21:38:55Z generation:1 name:name1 resourceVersion:869800 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6d3f5e6c-e8b3-4e53-a1e1-fc1efdbec4b0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Aug 17 21:39:05.963: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T21:39:05Z generation:1 name:name2 resourceVersion:869858 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:9ccbae4d-09cb-4651-8272-4d139cdeca11] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Aug 17 21:39:16.016: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T21:38:55Z generation:2 name:name1 resourceVersion:869911 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6d3f5e6c-e8b3-4e53-a1e1-fc1efdbec4b0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Aug 17 21:39:26.124: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T21:39:05Z generation:2 name:name2 resourceVersion:869986 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:9ccbae4d-09cb-4651-8272-4d139cdeca11] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Aug 17 21:39:36.132: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-17T21:38:55Z generation:2 name:name1 resourceVersion:870033 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6d3f5e6c-e8b3-4e53-a1e1-fc1efdbec4b0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Aug 17 21:39:46.143: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:39:56.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 17 21:39:57.715: INFO: Pod name wrapped-volume-race-b6249652-5116-4187-858d-03456e7d5179: Found 0 pods out of 5
Aug 17 21:40:02.729: INFO: Pod name wrapped-volume-race-b6249652-5116-4187-858d-03456e7d5179: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b6249652-5116-4187-858d-03456e7d5179 in namespace emptydir-wrapper-6518, will wait for the garbage collector to delete the pods
Aug 17 21:40:19.125: INFO: Deleting ReplicationController wrapped-volume-race-b6249652-5116-4187-858d-03456e7d5179 took: 110.043963ms
Aug 17 21:40:19.529: INFO: Terminating ReplicationController wrapped-volume-race-b6249652-5116-4187-858d-03456e7d5179 pods took: 403.605889ms
STEP: Creating RC which spawns configmap-volume pods
Aug 17 21:40:33.489: INFO: Pod name wrapped-volume-race-3be6e3f8-739b-449d-9333-65fc30a75c90: Found 0 pods out of 5
Aug 17 21:40:39.019: INFO: Pod name wrapped-volume-race-3be6e3f8-739b-449d-9333-65fc30a75c90: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3be6e3f8-739b-449d-9333-65fc30a75c90 in namespace emptydir-wrapper-6518, will wait for the garbage collector to delete the pods
Aug 17 21:40:59.221: INFO: Deleting ReplicationController wrapped-volume-race-3be6e3f8-739b-449d-9333-65fc30a75c90 took: 6.699911ms
Aug 17 21:40:59.422: INFO: Terminating ReplicationController wrapped-volume-race-3be6e3f8-739b-449d-9333-65fc30a75c90 pods took: 200.907087ms
STEP: Creating RC which spawns configmap-volume pods
Aug 17 21:41:11.999: INFO: Pod name wrapped-volume-race-f8c87480-7403-4092-8950-1835eeb6da2e: Found 0 pods out of 5
Aug 17 21:41:17.019: INFO: Pod name wrapped-volume-race-f8c87480-7403-4092-8950-1835eeb6da2e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f8c87480-7403-4092-8950-1835eeb6da2e in namespace emptydir-wrapper-6518, will wait for the garbage collector to delete the pods
Aug 17 21:41:33.799: INFO: Deleting ReplicationController wrapped-volume-race-f8c87480-7403-4092-8950-1835eeb6da2e took: 7.129973ms
Aug 17 21:41:34.400: INFO: Terminating ReplicationController wrapped-volume-race-f8c87480-7403-4092-8950-1835eeb6da2e pods took: 600.995867ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:41:53.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6518" for this suite.
• [SLOW TEST:117.046 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":8,"skipped":132,"failed":0}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:41:53.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 17 21:41:53.963: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:41:53.993: INFO: Number of nodes with available pods: 0
Aug 17 21:41:53.993: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:41:55.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:41:55.010: INFO: Number of nodes with available pods: 0
Aug 17 21:41:55.010: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:41:56.081: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:41:56.134: INFO: Number of nodes with available pods: 0
Aug 17 21:41:56.135: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:41:57.005: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:41:57.012: INFO: Number of nodes with available pods: 0
Aug 17 21:41:57.012: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:41:58.247: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:41:58.538: INFO: Number of nodes with available pods: 0
Aug 17 21:41:58.539: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:41:59.000: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:41:59.023: INFO: Number of nodes with available pods: 1
Aug 17 21:41:59.023: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:42:00.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:42:00.023: INFO: Number of nodes with available pods: 2
Aug 17 21:42:00.024: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 17 21:42:00.104: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:42:00.124: INFO: Number of nodes with available pods: 1
Aug 17 21:42:00.124: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 21:42:01.139: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:42:01.168: INFO: Number of nodes with available pods: 1
Aug 17 21:42:01.168: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 21:42:02.134: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:42:02.193: INFO: Number of nodes with available pods: 1
Aug 17 21:42:02.193: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 21:42:03.258: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:42:03.341: INFO: Number of nodes with available pods: 1
Aug 17 21:42:03.341: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 21:42:04.248: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:42:04.308: INFO: Number of nodes with available pods: 2
Aug 17 21:42:04.309: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1823, will wait for the garbage collector to delete the pods
Aug 17 21:42:04.611: INFO: Deleting DaemonSet.extensions daemon-set took: 69.498449ms
Aug 17 21:42:04.812: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.748322ms
Aug 17 21:42:11.718: INFO: Number of nodes with available pods: 0
Aug 17 21:42:11.718: INFO: Number of running nodes: 0, number of available pods: 0
Aug 17 21:42:11.744: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1823/daemonsets","resourceVersion":"871592"},"items":null}
Aug 17 21:42:11.753: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1823/pods","resourceVersion":"871592"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:42:11.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1823" for this suite.
• [SLOW TEST:18.059 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":9,"skipped":137,"failed":0}
S
------------------------------
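The "revived" check above relies on the DaemonSet controller recreating any daemon pod that fails or disappears. Roughly reproducible by hand (pod name and label selector are hypothetical, taken from typical e2e DaemonSets):

    kubectl -n daemonsets-1823 get pods -o wide          # one daemon pod per schedulable node
    kubectl -n daemonsets-1823 delete pod <daemon-pod>   # or force it to a Failed phase
    kubectl -n daemonsets-1823 get pods --watch          # controller creates a replacement

------------------------------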
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:42:11.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 21:42:31.909: INFO: Container started at 2020-08-17 21:42:14 +0000 UTC, pod became ready at 2020-08-17 21:42:31 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:42:31.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7662" for this suite.
• [SLOW TEST:20.143 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":138,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
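The ~17-second gap between "started" and "ready" above is the readiness probe's initial delay. A minimal sketch of the pattern, with hypothetical names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-demo
    spec:
      containers:
      - name: test-webserver
        image: nginx
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 20   # pod must not report Ready before this
          periodSeconds: 5
    EOF
    kubectl get pod readiness-demo --watch   # READY flips 0/1 -> 1/1 after the delay

------------------------------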
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:42:31.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 17 21:42:32.013: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 17 21:42:32.078: INFO: Waiting for terminating namespaces to be deleted...
Aug 17 21:42:32.092: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Aug 17 21:42:32.128: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 21:42:32.128: INFO: Container kube-proxy ready: true, restart count 0
Aug 17 21:42:32.128: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 21:42:32.128: INFO: Container kindnet-cni ready: true, restart count 0
Aug 17 21:42:32.128: INFO: test-webserver-333d1445-7acc-4088-b9e4-f55b70cc5a4e from container-probe-7662 started at 2020-08-17 21:42:11 +0000 UTC (1 container statuses recorded)
Aug 17 21:42:32.128: INFO: Container test-webserver ready: true, restart count 0
Aug 17 21:42:32.128: INFO: rally-a8e40cd8-6g9vmnyf-t4ks6 from c-rally-a8e40cd8-w1wpepcn started at 2020-08-17 21:42:17 +0000 UTC (1 container statuses recorded)
Aug 17 21:42:32.128: INFO: Container rally-a8e40cd8-6g9vmnyf ready: true, restart count 0
Aug 17 21:42:32.128: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Aug 17 21:42:32.181: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 21:42:32.181: INFO: Container kube-proxy ready: true, restart count 0
Aug 17 21:42:32.181: INFO: rally-a8e40cd8-6g9vmnyf-4gxvl from c-rally-a8e40cd8-w1wpepcn started at 2020-08-17 21:42:22 +0000 UTC (1 container statuses recorded)
Aug 17 21:42:32.181: INFO: Container rally-a8e40cd8-6g9vmnyf ready: false, restart count 0
Aug 17 21:42:32.181: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 21:42:32.181: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.162c2c2ff2c17324], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:42:33.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1371" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":11,"skipped":180,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
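The FailedScheduling event above is what a non-matching nodeSelector produces: the pod stays Pending. A minimal sketch (label key chosen to match nothing on purpose):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: restricted-pod
    spec:
      nodeSelector:
        example.com/does-not-exist: "true"   # matches no node
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
    EOF
    kubectl describe pod restricted-pod   # Events: FailedScheduling ... didn't match node selector

------------------------------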
[sig-network] Services
  should provide secure master service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:42:33.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:42:33.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3663" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":12,"skipped":201,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose
  should create services for rc [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:42:33.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 17 21:42:33.491: INFO: namespace kubectl-8022
Aug 17 21:42:33.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8022'
Aug 17 21:42:39.000: INFO: stderr: ""
Aug 17 21:42:39.001: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 17 21:42:40.012: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 17 21:42:40.013: INFO: Found 0 / 1
Aug 17 21:42:41.137: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 17 21:42:41.138: INFO: Found 0 / 1
Aug 17 21:42:42.032: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 17 21:42:42.032: INFO: Found 0 / 1
Aug 17 21:42:43.023: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 17 21:42:43.023: INFO: Found 0 / 1
Aug 17 21:42:44.009: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 17 21:42:44.010: INFO: Found 1 / 1
Aug 17 21:42:44.010: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 17 21:42:44.016: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 17 21:42:44.016: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 17 21:42:44.016: INFO: wait on agnhost-master startup in kubectl-8022
Aug 17 21:42:44.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-n46n8 agnhost-master --namespace=kubectl-8022'
Aug 17 21:42:45.354: INFO: stderr: ""
Aug 17 21:42:45.354: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 17 21:42:45.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8022'
Aug 17 21:42:46.807: INFO: stderr: ""
Aug 17 21:42:46.807: INFO: stdout: "service/rm2 exposed\n"
Aug 17 21:42:46.915: INFO: Service rm2 in namespace kubectl-8022 found.
STEP: exposing service
Aug 17 21:42:48.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8022'
Aug 17 21:42:50.370: INFO: stderr: ""
Aug 17 21:42:50.370: INFO: stdout: "service/rm3 exposed\n"
Aug 17 21:42:50.399: INFO: Service rm3 in namespace kubectl-8022 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:42:52.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8022" for this suite.
• [SLOW TEST:19.207 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":13,"skipped":208,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
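The two `kubectl expose` invocations above chain a Service off an RC and then a second Service off the first: rm2 fronts the RC's pods on 1234 -> 6379, and rm3 copies rm2's selector on 2345 -> 6379. The result can be checked with:

    kubectl -n kubectl-8022 get svc rm2 rm3 -o wide
    kubectl -n kubectl-8022 get endpoints rm2 rm3   # both point at the same pod

------------------------------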
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:42:52.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:43:05.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4839" for this suite.
• [SLOW TEST:12.416 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":14,"skipped":236,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
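"Capturing the life of a replica set" above means the quota's Used count tracks ReplicaSet creation and deletion. A sketch using an object-count quota (names hypothetical):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota-replicasets
    spec:
      hard:
        count/replicasets.apps: "1"
    EOF
    kubectl describe quota quota-replicasets   # Used climbs to 1 while a ReplicaSet exists

------------------------------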
[sig-cli] Kubectl client Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:43:05.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 17 21:43:05.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4121'
Aug 17 21:43:07.277: INFO: stderr: ""
Aug 17 21:43:07.277: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Aug 17 21:43:07.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4121'
Aug 17 21:43:10.756: INFO: stderr: ""
Aug 17 21:43:10.756: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:43:10.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4121" for this suite.
• [SLOW TEST:5.928 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756
    should create a pod from an image when restart is Never [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":15,"skipped":252,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
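With `--restart=Never`, `kubectl run` creates a bare Pod rather than a controller-managed workload (the `--generator=run-pod/v1` flag in the record is specific to this kubectl vintage and was later removed). Roughly:

    kubectl run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine
    kubectl get pod e2e-test-httpd-pod -o jsonpath='{.metadata.ownerReferences}'   # empty: no controller owns it

------------------------------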
[k8s.io] [sig-node] Events
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:43:10.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 17 21:43:15.585: INFO: &Pod{ObjectMeta:{send-events-d98b1c7a-da35-4bb9-b075-9489b13124f1  events-6621 /api/v1/namespaces/events-6621/pods/send-events-d98b1c7a-da35-4bb9-b075-9489b13124f1 9a1bbfe7-1c6f-448b-b1a9-f1211e4746dd 872079 0 2020-08-17 21:43:11 +0000 UTC map[name:foo time:139233756] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dsg25,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dsg25,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dsg25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 21:43:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 21:43:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 21:43:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 21:43:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.213,StartTime:2020-08-17 21:43:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 21:43:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c66b28bd523d27ab288b21084fe278be9b1e74adbac77ab2988f007f75d27bc3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Aug 17 21:43:17.803: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 17 21:43:19.813: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:43:19.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6621" for this suite.
• [SLOW TEST:9.369 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":16,"skipped":288,"failed":0}
SSSSSS
------------------------------
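The scheduler and kubelet events the test looks for are ordinary Event objects tied to the pod, queryable by field selector:

    kubectl -n events-6621 get events \
      --field-selector involvedObject.name=send-events-d98b1c7a-da35-4bb9-b075-9489b13124f1
    # Typical output: Scheduled (from default-scheduler), then Pulled/Created/Started (from kubelet)

------------------------------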
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c66b28bd523d27ab288b21084fe278be9b1e74adbac77ab2988f007f75d27bc3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 17 21:43:17.803: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 17 21:43:19.813: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:43:19.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6621" for this suite. • [SLOW TEST:9.369 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":16,"skipped":288,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:43:20.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Aug 17 21:43:20.878: INFO: Waiting up to 5m0s for pod "client-containers-0d7fc469-ac52-484a-81d1-21c4db2afb2b" in namespace "containers-8092" to be "success or failure" Aug 17 21:43:21.131: INFO: Pod "client-containers-0d7fc469-ac52-484a-81d1-21c4db2afb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 252.420861ms Aug 17 21:43:23.226: INFO: Pod "client-containers-0d7fc469-ac52-484a-81d1-21c4db2afb2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347562645s Aug 17 21:43:25.234: INFO: Pod "client-containers-0d7fc469-ac52-484a-81d1-21c4db2afb2b": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.355278741s Aug 17 21:43:27.242: INFO: Pod "client-containers-0d7fc469-ac52-484a-81d1-21c4db2afb2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.363607874s STEP: Saw pod success Aug 17 21:43:27.242: INFO: Pod "client-containers-0d7fc469-ac52-484a-81d1-21c4db2afb2b" satisfied condition "success or failure" Aug 17 21:43:27.246: INFO: Trying to get logs from node jerma-worker2 pod client-containers-0d7fc469-ac52-484a-81d1-21c4db2afb2b container test-container: STEP: delete the pod Aug 17 21:43:27.287: INFO: Waiting for pod client-containers-0d7fc469-ac52-484a-81d1-21c4db2afb2b to disappear Aug 17 21:43:27.345: INFO: Pod client-containers-0d7fc469-ac52-484a-81d1-21c4db2afb2b no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:43:27.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8092" for this suite. • [SLOW TEST:7.031 seconds] [k8s.io] Docker Containers /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:43:27.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Aug 17 21:43:27.419: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:43:35.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3538" for this suite. 
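
The "Docker Containers" spec above passes because a container's `command` and `args` fields take precedence over the image's ENTRYPOINT and CMD ("override all" means both are replaced). As a minimal sketch only, not the e2e framework's own test code, and with a hypothetical pod name, image, and command strings, such a pod looks like:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: Command replaces the image's ENTRYPOINT and
	// Args replaces the image's CMD, mirroring the "override all" case.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumed image
				Command: []string{"/bin/sh", "-c"},
				Args:    []string{"echo overridden command and args"},
			}},
		},
	}
	fmt.Printf("%s: command=%v args=%v\n",
		pod.Name, pod.Spec.Containers[0].Command, pod.Spec.Containers[0].Args)
}
```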
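The InitContainer spec just above exercises pods whose `spec.initContainers` each run to completion, in order, before any app container starts, here on a pod with RestartPolicy Always. A minimal sketch under the same assumptions (hypothetical names and images, not the suite's own pod):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical RestartAlways pod: the kubelet runs init1, then init2,
	// to completion before starting the long-running app container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	fmt.Println(pod.Name, "has", len(pod.Spec.InitContainers), "init containers")
}
```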
• [SLOW TEST:7.980 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":18,"skipped":317,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:43:35.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 17 21:43:35.413: INFO: Waiting up to 5m0s for pod "pod-179f98ea-dcf8-453b-a1d9-ac1e47d98978" in namespace "emptydir-5721" to be "success or failure" Aug 17 21:43:35.427: INFO: Pod "pod-179f98ea-dcf8-453b-a1d9-ac1e47d98978": Phase="Pending", Reason="", readiness=false. Elapsed: 13.585034ms Aug 17 21:43:37.434: INFO: Pod "pod-179f98ea-dcf8-453b-a1d9-ac1e47d98978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020496639s Aug 17 21:43:39.441: INFO: Pod "pod-179f98ea-dcf8-453b-a1d9-ac1e47d98978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027182956s STEP: Saw pod success Aug 17 21:43:39.441: INFO: Pod "pod-179f98ea-dcf8-453b-a1d9-ac1e47d98978" satisfied condition "success or failure" Aug 17 21:43:39.446: INFO: Trying to get logs from node jerma-worker pod pod-179f98ea-dcf8-453b-a1d9-ac1e47d98978 container test-container: STEP: delete the pod Aug 17 21:43:39.620: INFO: Waiting for pod pod-179f98ea-dcf8-453b-a1d9-ac1e47d98978 to disappear Aug 17 21:43:39.626: INFO: Pod pod-179f98ea-dcf8-453b-a1d9-ac1e47d98978 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:43:39.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5721" for this suite. 
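
The (root,0777,tmpfs) EmptyDir case above mounts a memory-backed emptyDir, which the kubelet provisions as tmpfs, and checks 0777 permissions from inside the container. The sketch below is hypothetical (names, image, and the stand-in shell check are assumptions; the conformance test uses its own verification container):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod mounting a memory-backed (tmpfs) emptyDir; the shell
	// command stands in for the test's permission check on the mount.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "chmod 0777 /mnt/cache && stat -c %a /mnt/cache"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/mnt/cache"}},
			}},
		},
	}
	fmt.Println(pod.Name, "uses medium:", pod.Spec.Volumes[0].EmptyDir.Medium)
}
```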
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":322,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:43:39.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 17 21:43:46.445: INFO: Successfully updated pod "pod-update-activedeadlineseconds-58313028-72c4-4e32-96e1-b93b5686e76e" Aug 17 21:43:46.445: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-58313028-72c4-4e32-96e1-b93b5686e76e" in namespace "pods-1468" to be "terminated due to deadline exceeded" Aug 17 21:43:46.469: INFO: Pod "pod-update-activedeadlineseconds-58313028-72c4-4e32-96e1-b93b5686e76e": Phase="Running", Reason="", readiness=true. Elapsed: 23.930248ms Aug 17 21:43:48.477: INFO: Pod "pod-update-activedeadlineseconds-58313028-72c4-4e32-96e1-b93b5686e76e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.031156331s Aug 17 21:43:48.477: INFO: Pod "pod-update-activedeadlineseconds-58313028-72c4-4e32-96e1-b93b5686e76e" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:43:48.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1468" for this suite. 
• [SLOW TEST:8.857 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":332,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:43:48.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 17 21:43:48.567: INFO: Waiting up to 5m0s for pod "pod-98596897-1cf5-4722-a798-7acd929695ff" in namespace "emptydir-2732" to be "success or failure" Aug 17 21:43:48.573: INFO: Pod "pod-98596897-1cf5-4722-a798-7acd929695ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007889ms Aug 17 21:43:50.581: INFO: Pod "pod-98596897-1cf5-4722-a798-7acd929695ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014314807s Aug 17 21:43:52.658: INFO: Pod "pod-98596897-1cf5-4722-a798-7acd929695ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090931273s STEP: Saw pod success Aug 17 21:43:52.658: INFO: Pod "pod-98596897-1cf5-4722-a798-7acd929695ff" satisfied condition "success or failure" Aug 17 21:43:52.675: INFO: Trying to get logs from node jerma-worker pod pod-98596897-1cf5-4722-a798-7acd929695ff container test-container: STEP: delete the pod Aug 17 21:43:52.789: INFO: Waiting for pod pod-98596897-1cf5-4722-a798-7acd929695ff to disappear Aug 17 21:43:52.793: INFO: Pod pod-98596897-1cf5-4722-a798-7acd929695ff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:43:52.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2732" for this suite. 
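
The (non-root,0666,tmpfs) variant above differs from the earlier root case in that the container runs as a non-root UID while writing a 0666 file on the tmpfs mount. A minimal sketch of that shape (UID, names, image, and the shell check are all hypothetical):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical non-root variant: the container runs as UID 1001 and
	// creates a 0666 file on the memory-backed emptyDir mount.
	uid := int64(1001)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && stat -c %a:%u /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	fmt.Println(pod.Name, "runs as UID", *pod.Spec.SecurityContext.RunAsUser)
}
```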
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":338,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:43:52.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 17 21:43:58.669: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:43:58.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-715" for this suite. 
• [SLOW TEST:6.086 seconds] [k8s.io] Container Runtime /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":343,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:43:58.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8464.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8464.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8464.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8464.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8464.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8464.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 17 21:44:05.314: INFO: DNS probes using dns-8464/dns-test-3b0be0f6-f658-434e-b640-ff042734cdb3 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:44:05.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8464" for this suite. • [SLOW TEST:6.580 seconds] [sig-network] DNS /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":23,"skipped":356,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:44:05.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6052 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-6052 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6052 Aug 17 21:44:05.974: INFO: Found 0 stateful pods, waiting for 1 Aug 17 21:44:15.983: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 17 21:44:15.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 21:44:17.533: INFO: stderr: "I0817 21:44:17.367670 194 log.go:172] (0x4000ab0bb0) (0x400071c1e0) Create stream\nI0817 21:44:17.372794 194 log.go:172] (0x4000ab0bb0) (0x400071c1e0) Stream added, broadcasting: 1\nI0817 21:44:17.383636 194 log.go:172] (0x4000ab0bb0) Reply frame received for 1\nI0817 21:44:17.384180 194 log.go:172] (0x4000ab0bb0) (0x400053f360) Create stream\nI0817 21:44:17.384243 194 log.go:172] (0x4000ab0bb0) (0x400053f360) Stream added, broadcasting: 3\nI0817 21:44:17.386286 194 log.go:172] (0x4000ab0bb0) Reply frame received for 3\nI0817 21:44:17.386850 194 log.go:172] (0x4000ab0bb0) (0x4000786000) Create stream\nI0817 21:44:17.386971 194 log.go:172] (0x4000ab0bb0) (0x4000786000) Stream added, broadcasting: 5\nI0817 21:44:17.388962 194 log.go:172] (0x4000ab0bb0) Reply frame received for 5\nI0817 21:44:17.477994 194 log.go:172] (0x4000ab0bb0) Data frame received for 5\nI0817 21:44:17.478346 194 log.go:172] (0x4000786000) (5) Data frame handling\nI0817 21:44:17.479169 194 log.go:172] (0x4000786000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 21:44:17.510302 194 log.go:172] (0x4000ab0bb0) Data frame received for 5\nI0817 21:44:17.510572 194 log.go:172] (0x4000786000) (5) Data frame handling\nI0817 21:44:17.510801 194 log.go:172] (0x4000ab0bb0) Data frame received for 3\nI0817 21:44:17.510930 194 log.go:172] (0x400053f360) (3) Data frame handling\nI0817 21:44:17.511073 194 log.go:172] (0x400053f360) (3) Data frame sent\nI0817 21:44:17.511234 194 log.go:172] (0x4000ab0bb0) Data frame received for 3\nI0817 21:44:17.511357 194 log.go:172] (0x400053f360) (3) Data frame handling\nI0817 21:44:17.512488 194 log.go:172] (0x4000ab0bb0) Data frame received for 1\nI0817 21:44:17.512641 194 log.go:172] (0x400071c1e0) (1) Data frame handling\nI0817 21:44:17.512976 194 log.go:172] (0x400071c1e0) (1) Data frame sent\nI0817 21:44:17.514336 194 log.go:172] (0x4000ab0bb0) (0x400071c1e0) Stream removed, broadcasting: 1\nI0817 21:44:17.517786 194 log.go:172] (0x4000ab0bb0) Go away received\nI0817 21:44:17.522842 194 log.go:172] (0x4000ab0bb0) (0x400071c1e0) Stream removed, broadcasting: 1\nI0817 21:44:17.523137 194 log.go:172] (0x4000ab0bb0) (0x400053f360) Stream removed, broadcasting: 3\nI0817 21:44:17.523337 194 log.go:172] (0x4000ab0bb0) (0x4000786000) Stream removed, broadcasting: 5\n" Aug 17 21:44:17.534: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 21:44:17.535: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 21:44:17.543: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 17 21:44:27.566: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 17 21:44:27.566: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 21:44:27.623: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:44:27.625: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:44:27.625: INFO: ss-1 Pending [] Aug 17 21:44:27.625: INFO: Aug 17 21:44:27.626: INFO: StatefulSet ss has not reached scale 3, at 2 Aug 17 21:44:28.635: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980872088s Aug 17 21:44:29.731: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971316766s Aug 17 21:44:30.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.875566039s Aug 17 21:44:31.876: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.778636124s Aug 17 21:44:32.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.730738582s Aug 17 21:44:33.921: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.693480264s Aug 17 21:44:34.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.684962708s Aug 17 21:44:35.938: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.67633472s Aug 17 21:44:36.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 668.656646ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6052 Aug 17 21:44:37.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:44:39.409: INFO: stderr: "I0817 21:44:39.316419 218 log.go:172] (0x4000af4dc0) (0x40007141e0) Create stream\nI0817 21:44:39.320149 218 log.go:172] (0x4000af4dc0) (0x40007141e0) Stream added, broadcasting: 1\nI0817 21:44:39.333985 218 log.go:172] (0x4000af4dc0) Reply frame received for 1\nI0817 21:44:39.334603 218 log.go:172] (0x4000af4dc0) (0x40007b0000) Create stream\nI0817 21:44:39.334668 218 log.go:172] (0x4000af4dc0) (0x40007b0000) Stream added, broadcasting: 3\nI0817 21:44:39.336352 218 log.go:172] (0x4000af4dc0) Reply frame received for 3\nI0817 21:44:39.336635 218 log.go:172] (0x4000af4dc0) (0x40007bc000) Create stream\nI0817 21:44:39.336699 218 log.go:172] (0x4000af4dc0) (0x40007bc000) Stream added, broadcasting: 5\nI0817 21:44:39.337872 218 log.go:172] (0x4000af4dc0) Reply frame received for 5\nI0817 21:44:39.390217 218 log.go:172] (0x4000af4dc0) Data frame received for 3\nI0817 21:44:39.390785 218 log.go:172] (0x4000af4dc0) Data frame received for 5\nI0817 21:44:39.390901 218 log.go:172] (0x40007bc000) (5) Data frame handling\nI0817 21:44:39.390998 218 log.go:172] (0x4000af4dc0) Data frame received for 1\nI0817 21:44:39.391124 218 log.go:172] (0x40007141e0) (1) Data frame handling\nI0817 21:44:39.391359 218 log.go:172] (0x40007b0000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 21:44:39.392556 218 log.go:172] (0x40007bc000) (5) Data frame sent\nI0817 21:44:39.392712 218 log.go:172] (0x40007b0000) (3) Data frame sent\nI0817 21:44:39.393023 218 log.go:172] (0x40007141e0) (1) Data frame sent\nI0817 21:44:39.393130 218 log.go:172] (0x4000af4dc0) Data frame received for 5\nI0817 21:44:39.393212 218 log.go:172] (0x40007bc000) (5) Data frame handling\nI0817 21:44:39.393388 218 log.go:172] (0x4000af4dc0) Data frame received for 3\nI0817 21:44:39.393481 218 log.go:172] (0x40007b0000) (3) Data frame handling\nI0817 21:44:39.396065 218 log.go:172] (0x4000af4dc0) (0x40007141e0) Stream removed, broadcasting: 1\nI0817 21:44:39.396626 218 log.go:172] (0x4000af4dc0) Go away received\nI0817 21:44:39.399477 218 log.go:172] (0x4000af4dc0) (0x40007141e0) Stream removed, 
broadcasting: 1\nI0817 21:44:39.399775 218 log.go:172] (0x4000af4dc0) (0x40007b0000) Stream removed, broadcasting: 3\nI0817 21:44:39.399922 218 log.go:172] (0x4000af4dc0) (0x40007bc000) Stream removed, broadcasting: 5\n" Aug 17 21:44:39.410: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 21:44:39.410: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 21:44:39.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:44:40.864: INFO: stderr: "I0817 21:44:40.772375 241 log.go:172] (0x40009f89a0) (0x4000944000) Create stream\nI0817 21:44:40.774706 241 log.go:172] (0x40009f89a0) (0x4000944000) Stream added, broadcasting: 1\nI0817 21:44:40.783357 241 log.go:172] (0x40009f89a0) Reply frame received for 1\nI0817 21:44:40.783884 241 log.go:172] (0x40009f89a0) (0x4000a34000) Create stream\nI0817 21:44:40.783939 241 log.go:172] (0x40009f89a0) (0x4000a34000) Stream added, broadcasting: 3\nI0817 21:44:40.785947 241 log.go:172] (0x40009f89a0) Reply frame received for 3\nI0817 21:44:40.786639 241 log.go:172] (0x40009f89a0) (0x4000a340a0) Create stream\nI0817 21:44:40.786801 241 log.go:172] (0x40009f89a0) (0x4000a340a0) Stream added, broadcasting: 5\nI0817 21:44:40.789274 241 log.go:172] (0x40009f89a0) Reply frame received for 5\nI0817 21:44:40.846076 241 log.go:172] (0x40009f89a0) Data frame received for 5\nI0817 21:44:40.846474 241 log.go:172] (0x40009f89a0) Data frame received for 3\nI0817 21:44:40.846768 241 log.go:172] (0x40009f89a0) Data frame received for 1\nI0817 21:44:40.847139 241 log.go:172] (0x4000a34000) (3) Data frame handling\nI0817 21:44:40.847234 241 log.go:172] (0x4000944000) (1) Data frame handling\nI0817 21:44:40.847358 241 log.go:172] (0x4000a340a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0817 21:44:40.848638 241 log.go:172] (0x4000a340a0) (5) Data frame sent\nI0817 21:44:40.848811 241 log.go:172] (0x4000a34000) (3) Data frame sent\nI0817 21:44:40.849104 241 log.go:172] (0x4000944000) (1) Data frame sent\nI0817 21:44:40.849392 241 log.go:172] (0x40009f89a0) Data frame received for 5\nI0817 21:44:40.849545 241 log.go:172] (0x4000a340a0) (5) Data frame handling\nI0817 21:44:40.849728 241 log.go:172] (0x40009f89a0) Data frame received for 3\nI0817 21:44:40.849788 241 log.go:172] (0x4000a34000) (3) Data frame handling\nI0817 21:44:40.852531 241 log.go:172] (0x40009f89a0) (0x4000944000) Stream removed, broadcasting: 1\nI0817 21:44:40.853756 241 log.go:172] (0x40009f89a0) Go away received\nI0817 21:44:40.856094 241 log.go:172] (0x40009f89a0) (0x4000944000) Stream removed, broadcasting: 1\nI0817 21:44:40.856447 241 log.go:172] (0x40009f89a0) (0x4000a34000) Stream removed, broadcasting: 3\nI0817 21:44:40.856802 241 log.go:172] (0x40009f89a0) (0x4000a340a0) Stream removed, broadcasting: 5\n" Aug 17 21:44:40.865: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 21:44:40.865: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 21:44:40.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:44:42.773: INFO: stderr: "I0817 21:44:42.661013 264 log.go:172] (0x40001142c0) (0x4000a1a000) Create stream\nI0817 21:44:42.663552 264 log.go:172] (0x40001142c0) (0x4000a1a000) Stream added, broadcasting: 1\nI0817 21:44:42.679461 264 log.go:172] (0x40001142c0) Reply frame received for 1\nI0817 21:44:42.680179 264 log.go:172] (0x40001142c0) (0x4000a1a0a0) Create stream\nI0817 21:44:42.680249 264 log.go:172] (0x40001142c0) (0x4000a1a0a0) Stream added, broadcasting: 3\nI0817 21:44:42.681861 264 log.go:172] (0x40001142c0) Reply frame received for 3\nI0817 21:44:42.682109 264 log.go:172] (0x40001142c0) (0x4000ab0000) Create stream\nI0817 21:44:42.682188 264 log.go:172] (0x40001142c0) (0x4000ab0000) Stream added, broadcasting: 5\nI0817 21:44:42.683373 264 log.go:172] (0x40001142c0) Reply frame received for 5\nI0817 21:44:42.749390 264 log.go:172] (0x40001142c0) Data frame received for 5\nI0817 21:44:42.749622 264 log.go:172] (0x40001142c0) Data frame received for 3\nI0817 21:44:42.749726 264 log.go:172] (0x4000a1a0a0) (3) Data frame handling\nI0817 21:44:42.749783 264 log.go:172] (0x4000ab0000) (5) Data frame handling\nI0817 21:44:42.750326 264 log.go:172] (0x4000a1a0a0) (3) Data frame sent\nI0817 21:44:42.750394 264 log.go:172] (0x4000ab0000) (5) Data frame sent\nI0817 21:44:42.750584 264 log.go:172] (0x40001142c0) Data frame received for 1\nI0817 21:44:42.750692 264 log.go:172] (0x4000a1a000) (1) Data frame handling\nI0817 21:44:42.750814 264 log.go:172] (0x4000a1a000) (1) Data frame sent\nI0817 21:44:42.750884 264 log.go:172] (0x40001142c0) Data frame received for 3\nI0817 21:44:42.750955 264 log.go:172] (0x4000a1a0a0) (3) Data frame handling\nI0817 21:44:42.751142 264 log.go:172] (0x40001142c0) Data frame received for 5\nI0817 21:44:42.751208 264 log.go:172] (0x4000ab0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0817 21:44:42.752859 264 log.go:172] (0x40001142c0) (0x4000a1a000) Stream removed, broadcasting: 1\nI0817 21:44:42.756616 264 log.go:172] (0x40001142c0) Go away received\nI0817 21:44:42.758856 264 log.go:172] (0x40001142c0) (0x4000a1a000) Stream removed, broadcasting: 1\nI0817 21:44:42.759707 264 log.go:172] (0x40001142c0) (0x4000a1a0a0) Stream removed, broadcasting: 3\nI0817 21:44:42.760024 264 log.go:172] (0x40001142c0) (0x4000ab0000) Stream removed, broadcasting: 5\n" Aug 17 21:44:42.773: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 17 21:44:42.774: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 17 21:44:42.785: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 17 21:44:42.785: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 17 21:44:42.785: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 17 21:44:42.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 21:44:44.270: INFO: stderr: "I0817 21:44:44.122543 289 log.go:172] (0x4000558630) (0x4000988000) Create stream\nI0817 21:44:44.129054 289 log.go:172] (0x4000558630) (0x4000988000) Stream added, 
broadcasting: 1\nI0817 21:44:44.143431 289 log.go:172] (0x4000558630) Reply frame received for 1\nI0817 21:44:44.144399 289 log.go:172] (0x4000558630) (0x40008b5b80) Create stream\nI0817 21:44:44.144505 289 log.go:172] (0x4000558630) (0x40008b5b80) Stream added, broadcasting: 3\nI0817 21:44:44.146643 289 log.go:172] (0x4000558630) Reply frame received for 3\nI0817 21:44:44.147226 289 log.go:172] (0x4000558630) (0x4000988140) Create stream\nI0817 21:44:44.147345 289 log.go:172] (0x4000558630) (0x4000988140) Stream added, broadcasting: 5\nI0817 21:44:44.148956 289 log.go:172] (0x4000558630) Reply frame received for 5\nI0817 21:44:44.248635 289 log.go:172] (0x4000558630) Data frame received for 5\nI0817 21:44:44.249239 289 log.go:172] (0x4000558630) Data frame received for 3\nI0817 21:44:44.249638 289 log.go:172] (0x4000988140) (5) Data frame handling\nI0817 21:44:44.249915 289 log.go:172] (0x4000558630) Data frame received for 1\nI0817 21:44:44.250051 289 log.go:172] (0x4000988000) (1) Data frame handling\nI0817 21:44:44.250207 289 log.go:172] (0x40008b5b80) (3) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 21:44:44.251564 289 log.go:172] (0x4000988000) (1) Data frame sent\nI0817 21:44:44.251778 289 log.go:172] (0x40008b5b80) (3) Data frame sent\nI0817 21:44:44.251926 289 log.go:172] (0x4000988140) (5) Data frame sent\nI0817 21:44:44.252000 289 log.go:172] (0x4000558630) Data frame received for 3\nI0817 21:44:44.252095 289 log.go:172] (0x40008b5b80) (3) Data frame handling\nI0817 21:44:44.252149 289 log.go:172] (0x4000558630) Data frame received for 5\nI0817 21:44:44.252212 289 log.go:172] (0x4000988140) (5) Data frame handling\nI0817 21:44:44.254741 289 log.go:172] (0x4000558630) (0x4000988000) Stream removed, broadcasting: 1\nI0817 21:44:44.256278 289 log.go:172] (0x4000558630) Go away received\nI0817 21:44:44.259105 289 log.go:172] (0x4000558630) (0x4000988000) Stream removed, broadcasting: 1\nI0817 21:44:44.259604 289 log.go:172] (0x4000558630) (0x40008b5b80) Stream removed, broadcasting: 3\nI0817 21:44:44.259836 289 log.go:172] (0x4000558630) (0x4000988140) Stream removed, broadcasting: 5\n" Aug 17 21:44:44.271: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 21:44:44.271: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 21:44:44.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 21:44:45.728: INFO: stderr: "I0817 21:44:45.596033 314 log.go:172] (0x4000b02bb0) (0x4000817c20) Create stream\nI0817 21:44:45.599878 314 log.go:172] (0x4000b02bb0) (0x4000817c20) Stream added, broadcasting: 1\nI0817 21:44:45.611993 314 log.go:172] (0x4000b02bb0) Reply frame received for 1\nI0817 21:44:45.612622 314 log.go:172] (0x4000b02bb0) (0x4000706640) Create stream\nI0817 21:44:45.612685 314 log.go:172] (0x4000b02bb0) (0x4000706640) Stream added, broadcasting: 3\nI0817 21:44:45.614402 314 log.go:172] (0x4000b02bb0) Reply frame received for 3\nI0817 21:44:45.614884 314 log.go:172] (0x4000b02bb0) (0x4000b90000) Create stream\nI0817 21:44:45.614979 314 log.go:172] (0x4000b02bb0) (0x4000b90000) Stream added, broadcasting: 5\nI0817 21:44:45.616563 314 log.go:172] (0x4000b02bb0) Reply frame received for 5\nI0817 21:44:45.680841 314 log.go:172] (0x4000b02bb0) Data frame received for 5\nI0817 
21:44:45.681113 314 log.go:172] (0x4000b90000) (5) Data frame handling\nI0817 21:44:45.681659 314 log.go:172] (0x4000b90000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 21:44:45.707469 314 log.go:172] (0x4000b02bb0) Data frame received for 5\nI0817 21:44:45.707672 314 log.go:172] (0x4000b02bb0) Data frame received for 3\nI0817 21:44:45.707947 314 log.go:172] (0x4000706640) (3) Data frame handling\nI0817 21:44:45.708098 314 log.go:172] (0x4000b90000) (5) Data frame handling\nI0817 21:44:45.708409 314 log.go:172] (0x4000706640) (3) Data frame sent\nI0817 21:44:45.708555 314 log.go:172] (0x4000b02bb0) Data frame received for 3\nI0817 21:44:45.708679 314 log.go:172] (0x4000706640) (3) Data frame handling\nI0817 21:44:45.709441 314 log.go:172] (0x4000b02bb0) Data frame received for 1\nI0817 21:44:45.709557 314 log.go:172] (0x4000817c20) (1) Data frame handling\nI0817 21:44:45.709654 314 log.go:172] (0x4000817c20) (1) Data frame sent\nI0817 21:44:45.711549 314 log.go:172] (0x4000b02bb0) (0x4000817c20) Stream removed, broadcasting: 1\nI0817 21:44:45.714832 314 log.go:172] (0x4000b02bb0) Go away received\nI0817 21:44:45.718456 314 log.go:172] (0x4000b02bb0) (0x4000817c20) Stream removed, broadcasting: 1\nI0817 21:44:45.718784 314 log.go:172] (0x4000b02bb0) (0x4000706640) Stream removed, broadcasting: 3\nI0817 21:44:45.719028 314 log.go:172] (0x4000b02bb0) (0x4000b90000) Stream removed, broadcasting: 5\n" Aug 17 21:44:45.729: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 21:44:45.730: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 21:44:45.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 17 21:44:47.262: INFO: stderr: "I0817 21:44:47.087341 337 log.go:172] (0x40001206e0) (0x40006ce0a0) Create stream\nI0817 21:44:47.091684 337 log.go:172] (0x40001206e0) (0x40006ce0a0) Stream added, broadcasting: 1\nI0817 21:44:47.103331 337 log.go:172] (0x40001206e0) Reply frame received for 1\nI0817 21:44:47.104077 337 log.go:172] (0x40001206e0) (0x40007e6000) Create stream\nI0817 21:44:47.104154 337 log.go:172] (0x40001206e0) (0x40007e6000) Stream added, broadcasting: 3\nI0817 21:44:47.105944 337 log.go:172] (0x40001206e0) Reply frame received for 3\nI0817 21:44:47.106272 337 log.go:172] (0x40001206e0) (0x40006ce140) Create stream\nI0817 21:44:47.106357 337 log.go:172] (0x40001206e0) (0x40006ce140) Stream added, broadcasting: 5\nI0817 21:44:47.107466 337 log.go:172] (0x40001206e0) Reply frame received for 5\nI0817 21:44:47.176668 337 log.go:172] (0x40001206e0) Data frame received for 5\nI0817 21:44:47.177031 337 log.go:172] (0x40006ce140) (5) Data frame handling\nI0817 21:44:47.177715 337 log.go:172] (0x40006ce140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 21:44:47.237573 337 log.go:172] (0x40001206e0) Data frame received for 3\nI0817 21:44:47.237677 337 log.go:172] (0x40007e6000) (3) Data frame handling\nI0817 21:44:47.237791 337 log.go:172] (0x40007e6000) (3) Data frame sent\nI0817 21:44:47.237888 337 log.go:172] (0x40001206e0) Data frame received for 3\nI0817 21:44:47.237954 337 log.go:172] (0x40007e6000) (3) Data frame handling\nI0817 21:44:47.238151 337 log.go:172] (0x40001206e0) Data frame received for 5\nI0817 21:44:47.238277 337 log.go:172] (0x40006ce140) 
(5) Data frame handling\nI0817 21:44:47.239607 337 log.go:172] (0x40001206e0) Data frame received for 1\nI0817 21:44:47.239679 337 log.go:172] (0x40006ce0a0) (1) Data frame handling\nI0817 21:44:47.239793 337 log.go:172] (0x40006ce0a0) (1) Data frame sent\nI0817 21:44:47.241731 337 log.go:172] (0x40001206e0) (0x40006ce0a0) Stream removed, broadcasting: 1\nI0817 21:44:47.244890 337 log.go:172] (0x40001206e0) Go away received\nI0817 21:44:47.248398 337 log.go:172] (0x40001206e0) (0x40006ce0a0) Stream removed, broadcasting: 1\nI0817 21:44:47.249062 337 log.go:172] (0x40001206e0) (0x40007e6000) Stream removed, broadcasting: 3\nI0817 21:44:47.249388 337 log.go:172] (0x40001206e0) (0x40006ce140) Stream removed, broadcasting: 5\n" Aug 17 21:44:47.263: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 17 21:44:47.263: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 17 21:44:47.263: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 21:44:47.268: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 17 21:44:57.282: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 17 21:44:57.282: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 17 21:44:57.282: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 17 21:44:57.317: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:44:57.317: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:44:57.318: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:44:57.318: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:44:57.319: INFO: Aug 17 21:44:57.319: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 21:44:58.327: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:44:58.328: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 
UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:44:58.328: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:44:58.329: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:44:58.329: INFO: Aug 17 21:44:58.329: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 21:44:59.397: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:44:59.397: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:44:59.397: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:44:59.397: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:44:59.397: INFO: Aug 17 21:44:59.398: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 21:45:00.406: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:45:00.406: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:45:00.407: INFO: ss-1 
jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:00.407: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:00.407: INFO: Aug 17 21:45:00.407: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 21:45:01.415: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:45:01.415: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:45:01.415: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:01.415: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:01.415: INFO: Aug 17 21:45:01.415: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 21:45:02.424: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:45:02.424: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:45:02.424: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:02.424: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:02.425: INFO: Aug 17 21:45:02.425: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 21:45:03.432: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:45:03.432: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:45:03.432: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:03.432: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:03.433: INFO: Aug 17 21:45:03.433: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 21:45:04.441: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:45:04.441: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:45:04.441: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:04.442: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:04.442: INFO: Aug 17 21:45:04.442: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 21:45:05.450: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:45:05.451: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:45:05.451: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:05.451: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:05.452: INFO: Aug 17 21:45:05.452: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 17 21:45:06.488: INFO: POD NODE PHASE GRACE CONDITIONS Aug 17 21:45:06.488: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:05 +0000 UTC }] Aug 17 21:45:06.489: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:06.489: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-17 21:44:27 +0000 UTC }] Aug 17 21:45:06.490: INFO: Aug 17 21:45:06.490: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6052 Aug 17 21:45:07.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:45:09.092: INFO: rc: 1 Aug 17 21:45:09.094: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Aug 17 21:45:19.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:45:20.468: INFO: rc: 1 Aug 17 21:45:20.468: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:45:30.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:45:31.886: INFO: rc: 1 Aug 17 21:45:31.887: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:45:41.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:45:43.671: INFO: rc: 1 Aug 17 21:45:43.671: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:45:53.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:45:55.102: INFO: rc: 1 Aug 17 21:45:55.102: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:46:05.103: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:46:06.405: INFO: rc: 1 Aug 17 21:46:06.406: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:46:16.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:46:17.659: INFO: rc: 1 Aug 17 21:46:17.659: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:46:27.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:46:28.975: INFO: rc: 1 Aug 17 21:46:28.976: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:46:38.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:46:40.225: INFO: rc: 1 Aug 17 21:46:40.225: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:46:50.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:46:51.715: INFO: rc: 1 Aug 17 21:46:51.716: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:47:01.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:47:03.102: INFO: rc: 1 Aug 17 21:47:03.103: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:47:13.104: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:47:14.515: INFO: rc: 1 Aug 17 21:47:14.516: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:47:24.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:47:27.510: INFO: rc: 1 Aug 17 21:47:27.510: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:47:37.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:47:38.785: INFO: rc: 1 Aug 17 21:47:38.785: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:47:48.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:47:50.049: INFO: rc: 1 Aug 17 21:47:50.049: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:48:00.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:48:01.522: INFO: rc: 1 Aug 17 21:48:01.522: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:48:11.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:48:12.752: INFO: rc: 1 Aug 17 21:48:12.752: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:48:22.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:48:24.158: INFO: rc: 1 Aug 17 21:48:24.158: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:48:34.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:48:35.438: INFO: rc: 1 Aug 17 21:48:35.438: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:48:45.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:48:46.709: INFO: rc: 1 Aug 17 21:48:46.709: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:48:56.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:48:57.915: INFO: rc: 1 Aug 17 21:48:57.915: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:49:07.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:49:09.144: INFO: rc: 1 Aug 17 21:49:09.145: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:49:19.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:49:20.402: INFO: rc: 1 Aug 17 21:49:20.402: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:49:30.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:49:31.617: INFO: rc: 1 Aug 17 21:49:31.617: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:49:41.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:49:42.844: INFO: rc: 1 Aug 17 21:49:42.844: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:49:52.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:49:54.098: INFO: rc: 1 Aug 17 21:49:54.099: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:50:04.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:50:06.082: INFO: rc: 1 Aug 17 21:50:06.082: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 17 21:50:16.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6052 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 17 21:50:17.283: INFO: rc: 1 Aug 17 21:50:17.284: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Aug 17 21:50:17.284: INFO: Scaling statefulset ss to 0 Aug 17 21:50:17.335: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Aug 17 21:50:17.340: INFO: Deleting all statefulset in ns statefulset-6052 Aug 17 21:50:17.345: INFO: Scaling statefulset ss to 0 Aug 17 21:50:17.357: INFO: Waiting for statefulset status.replicas updated to 0 Aug 17 21:50:17.360: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:50:17.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6052" for this suite. 
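For reference, the scale-down that this spec drives through the e2e framework can be reproduced by hand. A minimal sketch with kubectl, reusing the namespace and StatefulSet name from the log above (the suite's polling loop is simplified to a single status read):

kubectl --kubeconfig=/root/.kube/config scale statefulset ss --replicas=0 --namespace=statefulset-6052
# Poll until status.replicas reports 0, mirroring the suite's "Waiting for statefulset status.replicas updated to 0".
kubectl --kubeconfig=/root/.kube/config get statefulset ss --namespace=statefulset-6052 -o jsonpath='{.status.replicas}'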
• [SLOW TEST:371.915 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":24,"skipped":366,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:50:17.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl logs /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 STEP: creating an pod Aug 17 21:50:17.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-2298 -- logs-generator --log-lines-total 100 --run-duration 20s' Aug 17 21:50:18.777: INFO: stderr: "" Aug 17 21:50:18.777: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Aug 17 21:50:18.778: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Aug 17 21:50:18.778: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2298" to be "running and ready, or succeeded" Aug 17 21:50:18.805: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 27.367411ms Aug 17 21:50:20.813: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035051936s Aug 17 21:50:22.907: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128671245s Aug 17 21:50:24.931: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.153404742s Aug 17 21:50:24.932: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Aug 17 21:50:24.932: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Aug 17 21:50:24.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2298' Aug 17 21:50:26.541: INFO: stderr: "" Aug 17 21:50:26.541: INFO: stdout: "I0817 21:50:22.420073 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/gcz 520\nI0817 21:50:22.620243 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/2rrc 469\nI0817 21:50:22.821581 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/v96 313\nI0817 21:50:23.020247 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/bngh 256\nI0817 21:50:23.220258 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/zjx 429\nI0817 21:50:23.420262 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/cld 453\nI0817 21:50:23.620283 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/88n 301\nI0817 21:50:23.820300 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/bg9 204\nI0817 21:50:24.020261 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/v55 546\nI0817 21:50:24.220312 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/vjs 383\nI0817 21:50:24.420268 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/c7t 519\nI0817 21:50:24.620255 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/tfk 548\nI0817 21:50:24.820288 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/fkn 485\nI0817 21:50:25.020234 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/h5dv 516\nI0817 21:50:25.220289 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/fw79 504\nI0817 21:50:25.420230 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/9h6x 223\nI0817 21:50:25.620233 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/jgf 471\nI0817 21:50:25.820217 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/59nw 591\nI0817 21:50:26.020257 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/vht7 260\nI0817 21:50:26.220234 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/knbt 354\nI0817 21:50:26.420251 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/bdk5 361\n" STEP: limiting log lines Aug 17 21:50:26.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2298 --tail=1' Aug 17 21:50:27.861: INFO: stderr: "" Aug 17 21:50:27.862: INFO: stdout: "I0817 21:50:27.820305 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/jtwj 434\n" Aug 17 21:50:27.862: INFO: got output "I0817 21:50:27.820305 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/jtwj 434\n" STEP: limiting log bytes Aug 17 21:50:27.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2298 --limit-bytes=1' Aug 17 21:50:29.118: INFO: stderr: "" Aug 17 21:50:29.118: INFO: stdout: "I" Aug 17 21:50:29.118: INFO: got output "I" STEP: exposing timestamps Aug 17 21:50:29.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2298 --tail=1 --timestamps' Aug 17 21:50:30.435: INFO: stderr: "" Aug 17 21:50:30.435: INFO: stdout: "2020-08-17T21:50:30.420467452Z I0817 21:50:30.420284 1 logs_generator.go:76] 40 PUT /api/v1/namespaces/default/pods/msl4 395\n" Aug 17 21:50:30.436: INFO: got output 
"2020-08-17T21:50:30.420467452Z I0817 21:50:30.420284 1 logs_generator.go:76] 40 PUT /api/v1/namespaces/default/pods/msl4 395\n" STEP: restricting to a time range Aug 17 21:50:32.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2298 --since=1s' Aug 17 21:50:34.227: INFO: stderr: "" Aug 17 21:50:34.227: INFO: stdout: "I0817 21:50:33.220260 1 logs_generator.go:76] 54 GET /api/v1/namespaces/kube-system/pods/sv8l 410\nI0817 21:50:33.420239 1 logs_generator.go:76] 55 GET /api/v1/namespaces/default/pods/m6hj 366\nI0817 21:50:33.620250 1 logs_generator.go:76] 56 GET /api/v1/namespaces/default/pods/sfk 468\nI0817 21:50:33.820200 1 logs_generator.go:76] 57 PUT /api/v1/namespaces/default/pods/dnx 580\nI0817 21:50:34.020217 1 logs_generator.go:76] 58 PUT /api/v1/namespaces/default/pods/vfpl 295\n" Aug 17 21:50:34.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2298 --since=24h' Aug 17 21:50:35.525: INFO: stderr: "" Aug 17 21:50:35.525: INFO: stdout: "I0817 21:50:22.420073 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/gcz 520\nI0817 21:50:22.620243 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/2rrc 469\nI0817 21:50:22.821581 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/v96 313\nI0817 21:50:23.020247 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/bngh 256\nI0817 21:50:23.220258 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/zjx 429\nI0817 21:50:23.420262 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/cld 453\nI0817 21:50:23.620283 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/88n 301\nI0817 21:50:23.820300 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/bg9 204\nI0817 21:50:24.020261 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/v55 546\nI0817 21:50:24.220312 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/vjs 383\nI0817 21:50:24.420268 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/c7t 519\nI0817 21:50:24.620255 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/tfk 548\nI0817 21:50:24.820288 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/fkn 485\nI0817 21:50:25.020234 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/h5dv 516\nI0817 21:50:25.220289 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/fw79 504\nI0817 21:50:25.420230 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/9h6x 223\nI0817 21:50:25.620233 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/jgf 471\nI0817 21:50:25.820217 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/59nw 591\nI0817 21:50:26.020257 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/vht7 260\nI0817 21:50:26.220234 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/knbt 354\nI0817 21:50:26.420251 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/bdk5 361\nI0817 21:50:26.620251 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/k7rd 430\nI0817 21:50:26.820293 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/sxg 353\nI0817 21:50:27.020244 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/zqz 294\nI0817 21:50:27.220217 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/w8dq 208\nI0817 21:50:27.420219 1 logs_generator.go:76] 25 GET 
/api/v1/namespaces/kube-system/pods/m8l 315\nI0817 21:50:27.620208 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/sb8 227\nI0817 21:50:27.820305 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/jtwj 434\nI0817 21:50:28.020245 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/52cc 217\nI0817 21:50:28.220192 1 logs_generator.go:76] 29 POST /api/v1/namespaces/default/pods/4d7z 442\nI0817 21:50:28.420212 1 logs_generator.go:76] 30 PUT /api/v1/namespaces/kube-system/pods/fjc 521\nI0817 21:50:28.620216 1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/6hpz 489\nI0817 21:50:28.820220 1 logs_generator.go:76] 32 POST /api/v1/namespaces/default/pods/jkkt 513\nI0817 21:50:29.020205 1 logs_generator.go:76] 33 POST /api/v1/namespaces/kube-system/pods/jgq 209\nI0817 21:50:29.220290 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/kq8 284\nI0817 21:50:29.420285 1 logs_generator.go:76] 35 PUT /api/v1/namespaces/default/pods/stz 237\nI0817 21:50:29.620258 1 logs_generator.go:76] 36 PUT /api/v1/namespaces/kube-system/pods/d6x4 467\nI0817 21:50:29.820238 1 logs_generator.go:76] 37 PUT /api/v1/namespaces/ns/pods/48vt 472\nI0817 21:50:30.020201 1 logs_generator.go:76] 38 PUT /api/v1/namespaces/default/pods/fc8x 391\nI0817 21:50:30.220238 1 logs_generator.go:76] 39 GET /api/v1/namespaces/default/pods/twd 221\nI0817 21:50:30.420284 1 logs_generator.go:76] 40 PUT /api/v1/namespaces/default/pods/msl4 395\nI0817 21:50:30.620267 1 logs_generator.go:76] 41 GET /api/v1/namespaces/ns/pods/4vdz 570\nI0817 21:50:30.820254 1 logs_generator.go:76] 42 PUT /api/v1/namespaces/ns/pods/vmgm 398\nI0817 21:50:31.020276 1 logs_generator.go:76] 43 POST /api/v1/namespaces/kube-system/pods/msf9 244\nI0817 21:50:31.220357 1 logs_generator.go:76] 44 GET /api/v1/namespaces/kube-system/pods/pkz 526\nI0817 21:50:31.420262 1 logs_generator.go:76] 45 POST /api/v1/namespaces/kube-system/pods/962d 597\nI0817 21:50:31.620263 1 logs_generator.go:76] 46 GET /api/v1/namespaces/ns/pods/xgz 217\nI0817 21:50:31.820244 1 logs_generator.go:76] 47 PUT /api/v1/namespaces/default/pods/9x88 404\nI0817 21:50:32.020243 1 logs_generator.go:76] 48 GET /api/v1/namespaces/kube-system/pods/6lzh 227\nI0817 21:50:32.220268 1 logs_generator.go:76] 49 PUT /api/v1/namespaces/default/pods/xns 351\nI0817 21:50:32.420227 1 logs_generator.go:76] 50 POST /api/v1/namespaces/ns/pods/bs5d 353\nI0817 21:50:32.620244 1 logs_generator.go:76] 51 POST /api/v1/namespaces/default/pods/jh7 333\nI0817 21:50:32.820221 1 logs_generator.go:76] 52 POST /api/v1/namespaces/default/pods/l57 538\nI0817 21:50:33.020229 1 logs_generator.go:76] 53 GET /api/v1/namespaces/default/pods/5s2 452\nI0817 21:50:33.220260 1 logs_generator.go:76] 54 GET /api/v1/namespaces/kube-system/pods/sv8l 410\nI0817 21:50:33.420239 1 logs_generator.go:76] 55 GET /api/v1/namespaces/default/pods/m6hj 366\nI0817 21:50:33.620250 1 logs_generator.go:76] 56 GET /api/v1/namespaces/default/pods/sfk 468\nI0817 21:50:33.820200 1 logs_generator.go:76] 57 PUT /api/v1/namespaces/default/pods/dnx 580\nI0817 21:50:34.020217 1 logs_generator.go:76] 58 PUT /api/v1/namespaces/default/pods/vfpl 295\nI0817 21:50:34.220278 1 logs_generator.go:76] 59 POST /api/v1/namespaces/ns/pods/vfn2 251\nI0817 21:50:34.420269 1 logs_generator.go:76] 60 POST /api/v1/namespaces/default/pods/wwn 447\nI0817 21:50:34.620226 1 logs_generator.go:76] 61 POST /api/v1/namespaces/default/pods/v79 508\nI0817 21:50:34.820248 1 logs_generator.go:76] 62 POST 
/api/v1/namespaces/kube-system/pods/r5d7 556\nI0817 21:50:35.020231 1 logs_generator.go:76] 63 GET /api/v1/namespaces/ns/pods/9gb 446\nI0817 21:50:35.220238 1 logs_generator.go:76] 64 PUT /api/v1/namespaces/ns/pods/svvn 248\nI0817 21:50:35.420286 1 logs_generator.go:76] 65 GET /api/v1/namespaces/default/pods/6bf 422\n" [AfterEach] Kubectl logs /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Aug 17 21:50:35.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2298' Aug 17 21:50:41.685: INFO: stderr: "" Aug 17 21:50:41.685: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:50:41.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2298" for this suite. • [SLOW TEST:24.327 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":25,"skipped":368,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:50:41.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 17 21:50:41.842: INFO: Waiting up to 5m0s for pod "pod-7774891a-2836-4ca3-98a6-b228a9636eb9" in namespace "emptydir-3115" to be "success or failure" Aug 17 21:50:41.900: INFO: Pod "pod-7774891a-2836-4ca3-98a6-b228a9636eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 57.835485ms Aug 17 21:50:43.905: INFO: Pod "pod-7774891a-2836-4ca3-98a6-b228a9636eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063175169s Aug 17 21:50:45.912: INFO: Pod "pod-7774891a-2836-4ca3-98a6-b228a9636eb9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.069328495s Aug 17 21:50:48.033: INFO: Pod "pod-7774891a-2836-4ca3-98a6-b228a9636eb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190467593s STEP: Saw pod success Aug 17 21:50:48.033: INFO: Pod "pod-7774891a-2836-4ca3-98a6-b228a9636eb9" satisfied condition "success or failure" Aug 17 21:50:48.038: INFO: Trying to get logs from node jerma-worker pod pod-7774891a-2836-4ca3-98a6-b228a9636eb9 container test-container: STEP: delete the pod Aug 17 21:50:48.627: INFO: Waiting for pod pod-7774891a-2836-4ca3-98a6-b228a9636eb9 to disappear Aug 17 21:50:48.643: INFO: Pod pod-7774891a-2836-4ca3-98a6-b228a9636eb9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:50:48.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3115" for this suite. • [SLOW TEST:6.935 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":384,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:50:48.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Aug 17 21:51:03.366: INFO: 5 pods remaining Aug 17 21:51:03.367: INFO: 5 pods has nil DeletionTimestamp Aug 17 21:51:03.367: INFO: STEP: Gathering metrics W0817 21:51:07.423624 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 17 21:51:07.425: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 17 21:51:07.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6403" for this suite. • [SLOW TEST:19.648 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":27,"skipped":390,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 17 21:51:08.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 17 21:51:09.479: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
alternatives.log
containers/

[the proxied /logs/ responses for the remaining requests repeated the same two-line directory listing; 20 identical response bodies in total, only the first is kept here]
[log truncated: the remainder of the Proxy spec (test 28 of 278, passed) and the opening lines of the next spec are missing]
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-f56b94cd-f6ed-4c0b-874c-38ff2d5e0ec5
STEP: Creating a pod to test consume secrets
Aug 17 21:51:13.421: INFO: Waiting up to 5m0s for pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f" in namespace "secrets-363" to be "success or failure"
Aug 17 21:51:13.473: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f": Phase="Pending", Reason="", readiness=false. Elapsed: 51.615721ms
Aug 17 21:51:15.530: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108191707s
Aug 17 21:51:18.596: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.17450301s
Aug 17 21:51:20.656: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.233988522s
Aug 17 21:51:23.154: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.732456737s
Aug 17 21:51:25.480: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.058087094s
Aug 17 21:51:28.057: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.635564935s
Aug 17 21:51:30.668: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.246602797s
Aug 17 21:51:32.745: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.323616509s
Aug 17 21:51:35.196: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.773997024s
STEP: Saw pod success
Aug 17 21:51:35.196: INFO: Pod "pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f" satisfied condition "success or failure"
Aug 17 21:51:35.245: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f container secret-volume-test: 
STEP: delete the pod
Aug 17 21:51:36.507: INFO: Waiting for pod pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f to disappear
Aug 17 21:51:36.793: INFO: Pod pod-secrets-f228f755-5263-41f4-bdd0-8dd625fbd98f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:51:36.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-363" for this suite.

• [SLOW TEST:24.008 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":436,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:51:37.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-3327
STEP: creating replication controller nodeport-test in namespace services-3327
I0817 21:51:39.245183       7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3327, replica count: 2
I0817 21:51:42.298456       7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 21:51:45.300223       7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 17 21:51:45.300: INFO: Creating new exec pod
Aug 17 21:51:52.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3327 execpod46ss9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 17 21:51:55.058: INFO: stderr: "I0817 21:51:54.958714    1173 log.go:172] (0x4000aa0a50) (0x40007b5e00) Create stream\nI0817 21:51:54.962175    1173 log.go:172] (0x4000aa0a50) (0x40007b5e00) Stream added, broadcasting: 1\nI0817 21:51:54.974874    1173 log.go:172] (0x4000aa0a50) Reply frame received for 1\nI0817 21:51:54.975520    1173 log.go:172] (0x4000aa0a50) (0x40007b5ea0) Create stream\nI0817 21:51:54.975618    1173 log.go:172] (0x4000aa0a50) (0x40007b5ea0) Stream added, broadcasting: 3\nI0817 21:51:54.977394    1173 log.go:172] (0x4000aa0a50) Reply frame received for 3\nI0817 21:51:54.977679    1173 log.go:172] (0x4000aa0a50) (0x4000c48140) Create stream\nI0817 21:51:54.977741    1173 log.go:172] (0x4000aa0a50) (0x4000c48140) Stream added, broadcasting: 5\nI0817 21:51:54.979145    1173 log.go:172] (0x4000aa0a50) Reply frame received for 5\nI0817 21:51:55.034541    1173 log.go:172] (0x4000aa0a50) Data frame received for 3\nI0817 21:51:55.034880    1173 log.go:172] (0x4000aa0a50) Data frame received for 5\nI0817 21:51:55.035134    1173 log.go:172] (0x40007b5ea0) (3) Data frame handling\nI0817 21:51:55.035241    1173 log.go:172] (0x4000aa0a50) Data frame received for 1\nI0817 21:51:55.035363    1173 log.go:172] (0x40007b5e00) (1) Data frame handling\nI0817 21:51:55.035493    1173 log.go:172] (0x4000c48140) (5) Data frame handling\nI0817 21:51:55.037298    1173 log.go:172] (0x4000c48140) (5) Data frame sent\nI0817 21:51:55.037679    1173 log.go:172] (0x40007b5e00) (1) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0817 21:51:55.038177    1173 log.go:172] (0x4000aa0a50) Data frame received for 5\nI0817 21:51:55.038250    1173 log.go:172] (0x4000c48140) (5) Data frame handling\nI0817 21:51:55.038332    1173 log.go:172] (0x4000c48140) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0817 21:51:55.038403    1173 log.go:172] (0x4000aa0a50) Data frame received for 5\nI0817 21:51:55.038454    1173 log.go:172] (0x4000c48140) (5) Data frame handling\nI0817 21:51:55.040092    1173 log.go:172] (0x4000aa0a50) (0x40007b5e00) Stream removed, broadcasting: 1\nI0817 21:51:55.042871    1173 log.go:172] (0x4000aa0a50) (0x40007b5e00) Stream removed, broadcasting: 1\nI0817 21:51:55.043155    1173 log.go:172] (0x4000aa0a50) (0x40007b5ea0) Stream removed, broadcasting: 3\nI0817 21:51:55.043929    1173 log.go:172] (0x4000aa0a50) (0x4000c48140) Stream removed, broadcasting: 5\nI0817 21:51:55.045220    1173 log.go:172] (0x4000aa0a50) Go away received\n"
Aug 17 21:51:55.059: INFO: stdout: ""
Aug 17 21:51:55.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3327 execpod46ss9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.79.214 80'
Aug 17 21:51:56.796: INFO: stderr: "I0817 21:51:56.683230    1195 log.go:172] (0x400094ea50) (0x4000709c20) Create stream\nI0817 21:51:56.687167    1195 log.go:172] (0x400094ea50) (0x4000709c20) Stream added, broadcasting: 1\nI0817 21:51:56.696574    1195 log.go:172] (0x400094ea50) Reply frame received for 1\nI0817 21:51:56.697479    1195 log.go:172] (0x400094ea50) (0x4000709cc0) Create stream\nI0817 21:51:56.697590    1195 log.go:172] (0x400094ea50) (0x4000709cc0) Stream added, broadcasting: 3\nI0817 21:51:56.699032    1195 log.go:172] (0x400094ea50) Reply frame received for 3\nI0817 21:51:56.699278    1195 log.go:172] (0x400094ea50) (0x4000812000) Create stream\nI0817 21:51:56.699342    1195 log.go:172] (0x400094ea50) (0x4000812000) Stream added, broadcasting: 5\nI0817 21:51:56.700495    1195 log.go:172] (0x400094ea50) Reply frame received for 5\nI0817 21:51:56.779253    1195 log.go:172] (0x400094ea50) Data frame received for 3\nI0817 21:51:56.779560    1195 log.go:172] (0x4000709cc0) (3) Data frame handling\nI0817 21:51:56.780214    1195 log.go:172] (0x400094ea50) Data frame received for 5\nI0817 21:51:56.780382    1195 log.go:172] (0x400094ea50) Data frame received for 1\nI0817 21:51:56.780485    1195 log.go:172] (0x4000709c20) (1) Data frame handling\nI0817 21:51:56.780596    1195 log.go:172] (0x4000812000) (5) Data frame handling\nI0817 21:51:56.781750    1195 log.go:172] (0x4000709c20) (1) Data frame sent\nI0817 21:51:56.781921    1195 log.go:172] (0x4000812000) (5) Data frame sent\nI0817 21:51:56.782026    1195 log.go:172] (0x400094ea50) Data frame received for 5\n+ nc -zv -t -w 2 10.96.79.214 80\nConnection to 10.96.79.214 80 port [tcp/http] succeeded!\nI0817 21:51:56.782083    1195 log.go:172] (0x4000812000) (5) Data frame handling\nI0817 21:51:56.783145    1195 log.go:172] (0x400094ea50) (0x4000709c20) Stream removed, broadcasting: 1\nI0817 21:51:56.785272    1195 log.go:172] (0x400094ea50) Go away received\nI0817 21:51:56.787669    1195 log.go:172] (0x400094ea50) (0x4000709c20) Stream removed, broadcasting: 1\nI0817 21:51:56.787910    1195 log.go:172] (0x400094ea50) (0x4000709cc0) Stream removed, broadcasting: 3\nI0817 21:51:56.788095    1195 log.go:172] (0x400094ea50) (0x4000812000) Stream removed, broadcasting: 5\n"
Aug 17 21:51:56.798: INFO: stdout: ""
Aug 17 21:51:56.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3327 execpod46ss9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30734'
Aug 17 21:51:58.304: INFO: stderr: "I0817 21:51:58.180953    1218 log.go:172] (0x4000ac8b00) (0x4000b16140) Create stream\nI0817 21:51:58.183459    1218 log.go:172] (0x4000ac8b00) (0x4000b16140) Stream added, broadcasting: 1\nI0817 21:51:58.194086    1218 log.go:172] (0x4000ac8b00) Reply frame received for 1\nI0817 21:51:58.194643    1218 log.go:172] (0x4000ac8b00) (0x40005874a0) Create stream\nI0817 21:51:58.194702    1218 log.go:172] (0x4000ac8b00) (0x40005874a0) Stream added, broadcasting: 3\nI0817 21:51:58.196286    1218 log.go:172] (0x4000ac8b00) Reply frame received for 3\nI0817 21:51:58.196715    1218 log.go:172] (0x4000ac8b00) (0x4000807ae0) Create stream\nI0817 21:51:58.196916    1218 log.go:172] (0x4000ac8b00) (0x4000807ae0) Stream added, broadcasting: 5\nI0817 21:51:58.198741    1218 log.go:172] (0x4000ac8b00) Reply frame received for 5\nI0817 21:51:58.281897    1218 log.go:172] (0x4000ac8b00) Data frame received for 3\nI0817 21:51:58.282310    1218 log.go:172] (0x4000ac8b00) Data frame received for 5\nI0817 21:51:58.282463    1218 log.go:172] (0x4000807ae0) (5) Data frame handling\nI0817 21:51:58.282610    1218 log.go:172] (0x4000ac8b00) Data frame received for 1\nI0817 21:51:58.282783    1218 log.go:172] (0x4000b16140) (1) Data frame handling\nI0817 21:51:58.282977    1218 log.go:172] (0x40005874a0) (3) Data frame handling\nI0817 21:51:58.284100    1218 log.go:172] (0x4000b16140) (1) Data frame sent\nI0817 21:51:58.284240    1218 log.go:172] (0x4000807ae0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.6 30734\nConnection to 172.18.0.6 30734 port [tcp/30734] succeeded!\nI0817 21:51:58.284920    1218 log.go:172] (0x4000ac8b00) Data frame received for 5\nI0817 21:51:58.285037    1218 log.go:172] (0x4000807ae0) (5) Data frame handling\nI0817 21:51:58.286324    1218 log.go:172] (0x4000ac8b00) (0x4000b16140) Stream removed, broadcasting: 1\nI0817 21:51:58.289578    1218 log.go:172] (0x4000ac8b00) Go away received\nI0817 21:51:58.293485    1218 log.go:172] (0x4000ac8b00) (0x4000b16140) Stream removed, broadcasting: 1\nI0817 21:51:58.293800    1218 log.go:172] (0x4000ac8b00) (0x40005874a0) Stream removed, broadcasting: 3\nI0817 21:51:58.293993    1218 log.go:172] (0x4000ac8b00) (0x4000807ae0) Stream removed, broadcasting: 5\n"
Aug 17 21:51:58.305: INFO: stdout: ""
Aug 17 21:51:58.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3327 execpod46ss9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 30734'
Aug 17 21:52:00.198: INFO: stderr: "I0817 21:52:00.099800    1240 log.go:172] (0x40003cc000) (0x40008dbb80) Create stream\nI0817 21:52:00.103863    1240 log.go:172] (0x40003cc000) (0x40008dbb80) Stream added, broadcasting: 1\nI0817 21:52:00.114751    1240 log.go:172] (0x40003cc000) Reply frame received for 1\nI0817 21:52:00.115388    1240 log.go:172] (0x40003cc000) (0x4000ce6000) Create stream\nI0817 21:52:00.115473    1240 log.go:172] (0x40003cc000) (0x4000ce6000) Stream added, broadcasting: 3\nI0817 21:52:00.117438    1240 log.go:172] (0x40003cc000) Reply frame received for 3\nI0817 21:52:00.117744    1240 log.go:172] (0x40003cc000) (0x40008dbd60) Create stream\nI0817 21:52:00.117805    1240 log.go:172] (0x40003cc000) (0x40008dbd60) Stream added, broadcasting: 5\nI0817 21:52:00.119241    1240 log.go:172] (0x40003cc000) Reply frame received for 5\nI0817 21:52:00.177687    1240 log.go:172] (0x40003cc000) Data frame received for 3\nI0817 21:52:00.178220    1240 log.go:172] (0x40003cc000) Data frame received for 5\nI0817 21:52:00.178378    1240 log.go:172] (0x40008dbd60) (5) Data frame handling\nI0817 21:52:00.178574    1240 log.go:172] (0x40003cc000) Data frame received for 1\nI0817 21:52:00.178707    1240 log.go:172] (0x40008dbb80) (1) Data frame handling\nI0817 21:52:00.178943    1240 log.go:172] (0x4000ce6000) (3) Data frame handling\nI0817 21:52:00.179984    1240 log.go:172] (0x40008dbb80) (1) Data frame sent\nI0817 21:52:00.180805    1240 log.go:172] (0x40008dbd60) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 30734\nConnection to 172.18.0.3 30734 port [tcp/30734] succeeded!\nI0817 21:52:00.181588    1240 log.go:172] (0x40003cc000) Data frame received for 5\nI0817 21:52:00.181710    1240 log.go:172] (0x40008dbd60) (5) Data frame handling\nI0817 21:52:00.182498    1240 log.go:172] (0x40003cc000) (0x40008dbb80) Stream removed, broadcasting: 1\nI0817 21:52:00.184616    1240 log.go:172] (0x40003cc000) Go away received\nI0817 21:52:00.187804    1240 log.go:172] (0x40003cc000) (0x40008dbb80) Stream removed, broadcasting: 1\nI0817 21:52:00.188174    1240 log.go:172] (0x40003cc000) (0x4000ce6000) Stream removed, broadcasting: 3\nI0817 21:52:00.188431    1240 log.go:172] (0x40003cc000) (0x40008dbd60) Stream removed, broadcasting: 5\n"
Aug 17 21:52:00.199: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:52:00.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3327" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.344 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":30,"skipped":443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:52:00.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-5f71150b-760e-4a6d-a19a-ec3a6a76fdd9
STEP: Creating configMap with name cm-test-opt-upd-378b3fa1-6dac-48c5-99cd-9620a82d04b2
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5f71150b-760e-4a6d-a19a-ec3a6a76fdd9
STEP: Updating configmap cm-test-opt-upd-378b3fa1-6dac-48c5-99cd-9620a82d04b2
STEP: Creating configMap with name cm-test-opt-create-c048e59f-a93d-4fe6-bb04-4bf824552a0f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:53:28.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6251" for this suite.

• [SLOW TEST:88.745 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":486,"failed":0}
SS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:53:29.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-e448f0fb-2810-4b45-9a4a-5c78d196eb7f
STEP: Creating secret with name secret-projected-all-test-volume-adb8f738-7a05-4a84-acc3-1c3590b3e08a
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 17 21:53:31.045: INFO: Waiting up to 5m0s for pod "projected-volume-937f2422-7cae-4118-ad67-edd1301865c5" in namespace "projected-1554" to be "success or failure"
Aug 17 21:53:31.657: INFO: Pod "projected-volume-937f2422-7cae-4118-ad67-edd1301865c5": Phase="Pending", Reason="", readiness=false. Elapsed: 611.290727ms
Aug 17 21:53:33.664: INFO: Pod "projected-volume-937f2422-7cae-4118-ad67-edd1301865c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.617977896s
Aug 17 21:53:36.341: INFO: Pod "projected-volume-937f2422-7cae-4118-ad67-edd1301865c5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.295290615s
Aug 17 21:53:38.415: INFO: Pod "projected-volume-937f2422-7cae-4118-ad67-edd1301865c5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.369561345s
Aug 17 21:53:41.041: INFO: Pod "projected-volume-937f2422-7cae-4118-ad67-edd1301865c5": Phase="Running", Reason="", readiness=true. Elapsed: 9.995483925s
Aug 17 21:53:43.604: INFO: Pod "projected-volume-937f2422-7cae-4118-ad67-edd1301865c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.558588809s
STEP: Saw pod success
Aug 17 21:53:43.604: INFO: Pod "projected-volume-937f2422-7cae-4118-ad67-edd1301865c5" satisfied condition "success or failure"
Aug 17 21:53:44.290: INFO: Trying to get logs from node jerma-worker pod projected-volume-937f2422-7cae-4118-ad67-edd1301865c5 container projected-all-volume-test: 
STEP: delete the pod
Aug 17 21:53:45.783: INFO: Waiting for pod projected-volume-937f2422-7cae-4118-ad67-edd1301865c5 to disappear
Aug 17 21:53:45.797: INFO: Pod projected-volume-937f2422-7cae-4118-ad67-edd1301865c5 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:53:45.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1554" for this suite.

• [SLOW TEST:17.455 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":32,"skipped":488,"failed":0}
S
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:53:46.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 17 21:53:48.476: INFO: Created pod &Pod{ObjectMeta:{dns-5738  dns-5738 /api/v1/namespaces/dns-5738/pods/dns-5738 9964557b-4e6e-475e-8fd7-08c3cf1efe60 875214 0 2020-08-17 21:53:48 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5wbg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5wbg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5wbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 17 21:53:56.816: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5738 PodName:dns-5738 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 21:53:56.816: INFO: >>> kubeConfig: /root/.kube/config
I0817 21:53:56.882729       7 log.go:172] (0x4002eb22c0) (0x4002f0d360) Create stream
I0817 21:53:56.883112       7 log.go:172] (0x4002eb22c0) (0x4002f0d360) Stream added, broadcasting: 1
I0817 21:53:56.910253       7 log.go:172] (0x4002eb22c0) Reply frame received for 1
I0817 21:53:56.910820       7 log.go:172] (0x4002eb22c0) (0x40024b6000) Create stream
I0817 21:53:56.910894       7 log.go:172] (0x4002eb22c0) (0x40024b6000) Stream added, broadcasting: 3
I0817 21:53:56.914995       7 log.go:172] (0x4002eb22c0) Reply frame received for 3
I0817 21:53:56.915302       7 log.go:172] (0x4002eb22c0) (0x4002f0d400) Create stream
I0817 21:53:56.915383       7 log.go:172] (0x4002eb22c0) (0x4002f0d400) Stream added, broadcasting: 5
I0817 21:53:56.916919       7 log.go:172] (0x4002eb22c0) Reply frame received for 5
I0817 21:53:56.973497       7 log.go:172] (0x4002eb22c0) Data frame received for 3
I0817 21:53:56.973932       7 log.go:172] (0x40024b6000) (3) Data frame handling
I0817 21:53:56.974182       7 log.go:172] (0x4002eb22c0) Data frame received for 5
I0817 21:53:56.974342       7 log.go:172] (0x4002f0d400) (5) Data frame handling
I0817 21:53:56.976140       7 log.go:172] (0x4002eb22c0) Data frame received for 1
I0817 21:53:56.976302       7 log.go:172] (0x4002f0d360) (1) Data frame handling
I0817 21:53:56.976933       7 log.go:172] (0x4002f0d360) (1) Data frame sent
I0817 21:53:56.977745       7 log.go:172] (0x40024b6000) (3) Data frame sent
I0817 21:53:56.977868       7 log.go:172] (0x4002eb22c0) Data frame received for 3
I0817 21:53:56.978444       7 log.go:172] (0x4002eb22c0) (0x4002f0d360) Stream removed, broadcasting: 1
I0817 21:53:56.979576       7 log.go:172] (0x40024b6000) (3) Data frame handling
I0817 21:53:56.981067       7 log.go:172] (0x4002eb22c0) Go away received
I0817 21:53:56.983146       7 log.go:172] (0x4002eb22c0) (0x4002f0d360) Stream removed, broadcasting: 1
I0817 21:53:56.983677       7 log.go:172] (0x4002eb22c0) (0x40024b6000) Stream removed, broadcasting: 3
I0817 21:53:56.983965       7 log.go:172] (0x4002eb22c0) (0x4002f0d400) Stream removed, broadcasting: 5
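The log.go lines above are the client side of the pod exec subresource: the framework negotiates a SPDY connection and opens one stream per channel, which the "broadcasting: 1", "3", and "5" entries record — 1 carries the error/result channel and, with stdout and stderr both requested, 3 and 5 carry those. A stripped-down client-go sketch of the same ExecWithOptions call, with the kubeconfig path, pod, and command taken from this run (assuming the framework's SPDY code path):

package main

import (
    "bytes"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/remotecommand"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // Build the exec subresource request, mirroring the ExecWithOptions line above.
    req := clientset.CoreV1().RESTClient().Post().
        Resource("pods").Namespace("dns-5738").Name("dns-5738").
        SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: "agnhost",
            Command:   []string{"/agnhost", "dns-suffix"},
            Stdout:    true,
            Stderr:    true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
    if err != nil {
        panic(err)
    }
    var stdout, stderr bytes.Buffer
    // Stream multiplexes the connection into the per-channel streams seen in
    // the log and blocks until the remote command exits.
    if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
        panic(err)
    }
    fmt.Print(stdout.String())
}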
STEP: Verifying customized DNS server is configured on pod...
Aug 17 21:53:56.985: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5738 PodName:dns-5738 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 21:53:56.985: INFO: >>> kubeConfig: /root/.kube/config
I0817 21:53:57.049233       7 log.go:172] (0x4002fe7e40) (0x4002489680) Create stream
I0817 21:53:57.049367       7 log.go:172] (0x4002fe7e40) (0x4002489680) Stream added, broadcasting: 1
I0817 21:53:57.052465       7 log.go:172] (0x4002fe7e40) Reply frame received for 1
I0817 21:53:57.052681       7 log.go:172] (0x4002fe7e40) (0x400319c0a0) Create stream
I0817 21:53:57.052787       7 log.go:172] (0x4002fe7e40) (0x400319c0a0) Stream added, broadcasting: 3
I0817 21:53:57.054296       7 log.go:172] (0x4002fe7e40) Reply frame received for 3
I0817 21:53:57.054451       7 log.go:172] (0x4002fe7e40) (0x40024b60a0) Create stream
I0817 21:53:57.054532       7 log.go:172] (0x4002fe7e40) (0x40024b60a0) Stream added, broadcasting: 5
I0817 21:53:57.056057       7 log.go:172] (0x4002fe7e40) Reply frame received for 5
I0817 21:53:57.120649       7 log.go:172] (0x4002fe7e40) Data frame received for 3
I0817 21:53:57.121026       7 log.go:172] (0x400319c0a0) (3) Data frame handling
I0817 21:53:57.121190       7 log.go:172] (0x400319c0a0) (3) Data frame sent
I0817 21:53:57.125018       7 log.go:172] (0x4002fe7e40) Data frame received for 5
I0817 21:53:57.125269       7 log.go:172] (0x40024b60a0) (5) Data frame handling
I0817 21:53:57.125751       7 log.go:172] (0x4002fe7e40) Data frame received for 3
I0817 21:53:57.125883       7 log.go:172] (0x400319c0a0) (3) Data frame handling
I0817 21:53:57.128388       7 log.go:172] (0x4002fe7e40) Data frame received for 1
I0817 21:53:57.128480       7 log.go:172] (0x4002489680) (1) Data frame handling
I0817 21:53:57.128586       7 log.go:172] (0x4002489680) (1) Data frame sent
I0817 21:53:57.128709       7 log.go:172] (0x4002fe7e40) (0x4002489680) Stream removed, broadcasting: 1
I0817 21:53:57.128953       7 log.go:172] (0x4002fe7e40) Go away received
I0817 21:53:57.129240       7 log.go:172] (0x4002fe7e40) (0x4002489680) Stream removed, broadcasting: 1
I0817 21:53:57.129351       7 log.go:172] (0x4002fe7e40) (0x400319c0a0) Stream removed, broadcasting: 3
I0817 21:53:57.129425       7 log.go:172] (0x4002fe7e40) (0x40024b60a0) Stream removed, broadcasting: 5
Aug 17 21:53:57.129: INFO: Deleting pod dns-5738...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:53:57.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5738" for this suite.

• [SLOW TEST:10.768 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":33,"skipped":489,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:53:57.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 21:53:59.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9819
I0817 21:53:59.153815       7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9819, replica count: 1
I0817 21:54:00.205139       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 21:54:01.205760       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 21:54:02.206430       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 21:54:03.207079       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 21:54:04.207630       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 21:54:05.208391       7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 17 21:54:05.565: INFO: Created: latency-svc-jz9zt
Aug 17 21:54:05.726: INFO: Got endpoints: latency-svc-jz9zt [415.639439ms]
Aug 17 21:54:05.951: INFO: Created: latency-svc-hjlqp
Aug 17 21:54:06.015: INFO: Got endpoints: latency-svc-hjlqp [287.597171ms]
Aug 17 21:54:06.015: INFO: Created: latency-svc-b8th9
Aug 17 21:54:06.233: INFO: Got endpoints: latency-svc-b8th9 [505.688894ms]
Aug 17 21:54:06.237: INFO: Created: latency-svc-m8b74
Aug 17 21:54:06.266: INFO: Got endpoints: latency-svc-m8b74 [539.146387ms]
Aug 17 21:54:06.586: INFO: Created: latency-svc-wxqgb
Aug 17 21:54:06.590: INFO: Got endpoints: latency-svc-wxqgb [863.173518ms]
Aug 17 21:54:06.872: INFO: Created: latency-svc-xsm96
Aug 17 21:54:06.890: INFO: Got endpoints: latency-svc-xsm96 [1.162940447s]
Aug 17 21:54:07.439: INFO: Created: latency-svc-rmwb6
Aug 17 21:54:07.520: INFO: Got endpoints: latency-svc-rmwb6 [1.792075388s]
Aug 17 21:54:07.599: INFO: Created: latency-svc-9sh5v
Aug 17 21:54:07.609: INFO: Got endpoints: latency-svc-9sh5v [1.881268853s]
Aug 17 21:54:07.742: INFO: Created: latency-svc-nqnct
Aug 17 21:54:07.791: INFO: Created: latency-svc-sm7ms
Aug 17 21:54:07.791: INFO: Got endpoints: latency-svc-nqnct [2.064198232s]
Aug 17 21:54:07.802: INFO: Got endpoints: latency-svc-sm7ms [2.072441956s]
Aug 17 21:54:07.839: INFO: Created: latency-svc-jxf8m
Aug 17 21:54:07.913: INFO: Got endpoints: latency-svc-jxf8m [2.184925489s]
Aug 17 21:54:07.948: INFO: Created: latency-svc-6xlrf
Aug 17 21:54:07.957: INFO: Got endpoints: latency-svc-6xlrf [2.22885171s]
Aug 17 21:54:07.983: INFO: Created: latency-svc-9vzzh
Aug 17 21:54:08.070: INFO: Got endpoints: latency-svc-9vzzh [2.342686349s]
Aug 17 21:54:08.073: INFO: Created: latency-svc-7w74s
Aug 17 21:54:08.090: INFO: Got endpoints: latency-svc-7w74s [2.361755968s]
Aug 17 21:54:08.121: INFO: Created: latency-svc-dqxbh
Aug 17 21:54:08.138: INFO: Got endpoints: latency-svc-dqxbh [2.410564764s]
Aug 17 21:54:08.238: INFO: Created: latency-svc-xw2gk
Aug 17 21:54:08.240: INFO: Got endpoints: latency-svc-xw2gk [2.513210264s]
Aug 17 21:54:08.277: INFO: Created: latency-svc-xthrs
Aug 17 21:54:08.301: INFO: Got endpoints: latency-svc-xthrs [2.286045711s]
Aug 17 21:54:08.915: INFO: Created: latency-svc-ds6bc
Aug 17 21:54:08.918: INFO: Got endpoints: latency-svc-ds6bc [2.684704808s]
Aug 17 21:54:09.149: INFO: Created: latency-svc-4ppz5
Aug 17 21:54:09.199: INFO: Got endpoints: latency-svc-4ppz5 [2.932967656s]
Aug 17 21:54:09.810: INFO: Created: latency-svc-j666z
Aug 17 21:54:09.852: INFO: Got endpoints: latency-svc-j666z [3.261624308s]
Aug 17 21:54:10.193: INFO: Created: latency-svc-9vxts
Aug 17 21:54:10.376: INFO: Got endpoints: latency-svc-9vxts [3.48539074s]
Aug 17 21:54:10.429: INFO: Created: latency-svc-cbt7q
Aug 17 21:54:10.730: INFO: Got endpoints: latency-svc-cbt7q [3.209537273s]
Aug 17 21:54:11.284: INFO: Created: latency-svc-d6xh9
Aug 17 21:54:11.303: INFO: Got endpoints: latency-svc-d6xh9 [3.69394725s]
Aug 17 21:54:11.682: INFO: Created: latency-svc-rflsm
Aug 17 21:54:11.690: INFO: Got endpoints: latency-svc-rflsm [3.898196971s]
Aug 17 21:54:12.023: INFO: Created: latency-svc-vfn75
Aug 17 21:54:12.072: INFO: Got endpoints: latency-svc-vfn75 [4.269552946s]
Aug 17 21:54:12.233: INFO: Created: latency-svc-cstg6
Aug 17 21:54:12.268: INFO: Got endpoints: latency-svc-cstg6 [4.355134403s]
Aug 17 21:54:13.192: INFO: Created: latency-svc-w86xd
Aug 17 21:54:13.215: INFO: Got endpoints: latency-svc-w86xd [5.257352753s]
Aug 17 21:54:13.766: INFO: Created: latency-svc-czqrl
Aug 17 21:54:13.770: INFO: Got endpoints: latency-svc-czqrl [5.699249147s]
Aug 17 21:54:14.018: INFO: Created: latency-svc-jp4lt
Aug 17 21:54:14.042: INFO: Got endpoints: latency-svc-jp4lt [5.952149594s]
Aug 17 21:54:14.238: INFO: Created: latency-svc-gh7fw
Aug 17 21:54:14.243: INFO: Got endpoints: latency-svc-gh7fw [6.104397322s]
Aug 17 21:54:14.435: INFO: Created: latency-svc-xhcgn
Aug 17 21:54:14.494: INFO: Got endpoints: latency-svc-xhcgn [6.253407106s]
Aug 17 21:54:14.862: INFO: Created: latency-svc-6h5zn
Aug 17 21:54:14.867: INFO: Got endpoints: latency-svc-6h5zn [6.565935069s]
Aug 17 21:54:15.834: INFO: Created: latency-svc-q7svl
Aug 17 21:54:15.906: INFO: Got endpoints: latency-svc-q7svl [6.987714902s]
Aug 17 21:54:16.120: INFO: Created: latency-svc-wdx4j
Aug 17 21:54:16.245: INFO: Got endpoints: latency-svc-wdx4j [7.04523309s]
Aug 17 21:54:16.272: INFO: Created: latency-svc-68s5f
Aug 17 21:54:16.307: INFO: Got endpoints: latency-svc-68s5f [6.454171798s]
Aug 17 21:54:16.559: INFO: Created: latency-svc-zgkwg
Aug 17 21:54:16.743: INFO: Got endpoints: latency-svc-zgkwg [6.367123894s]
Aug 17 21:54:17.707: INFO: Created: latency-svc-xwtpk
Aug 17 21:54:18.474: INFO: Got endpoints: latency-svc-xwtpk [7.744565789s]
Aug 17 21:54:18.887: INFO: Created: latency-svc-45cvh
Aug 17 21:54:19.240: INFO: Got endpoints: latency-svc-45cvh [7.936537816s]
Aug 17 21:54:19.536: INFO: Created: latency-svc-glj8r
Aug 17 21:54:19.910: INFO: Got endpoints: latency-svc-glj8r [8.220285918s]
Aug 17 21:54:20.280: INFO: Created: latency-svc-rfsvn
Aug 17 21:54:20.672: INFO: Got endpoints: latency-svc-rfsvn [8.599605717s]
Aug 17 21:54:20.824: INFO: Created: latency-svc-89q85
Aug 17 21:54:20.912: INFO: Got endpoints: latency-svc-89q85 [8.643033523s]
Aug 17 21:54:21.027: INFO: Created: latency-svc-x5vft
Aug 17 21:54:21.566: INFO: Got endpoints: latency-svc-x5vft [8.350669539s]
Aug 17 21:54:21.933: INFO: Created: latency-svc-2k97n
Aug 17 21:54:22.275: INFO: Got endpoints: latency-svc-2k97n [8.505335232s]
Aug 17 21:54:22.558: INFO: Created: latency-svc-t625q
Aug 17 21:54:22.562: INFO: Got endpoints: latency-svc-t625q [8.519665321s]
Aug 17 21:54:22.740: INFO: Created: latency-svc-d7nvl
Aug 17 21:54:22.969: INFO: Got endpoints: latency-svc-d7nvl [8.726563619s]
Aug 17 21:54:22.972: INFO: Created: latency-svc-gjp6p
Aug 17 21:54:23.012: INFO: Got endpoints: latency-svc-gjp6p [8.517413416s]
Aug 17 21:54:23.670: INFO: Created: latency-svc-tkl9z
Aug 17 21:54:23.714: INFO: Got endpoints: latency-svc-tkl9z [8.846201194s]
Aug 17 21:54:24.113: INFO: Created: latency-svc-6hfct
Aug 17 21:54:24.439: INFO: Got endpoints: latency-svc-6hfct [8.53300149s]
Aug 17 21:54:25.059: INFO: Created: latency-svc-qc2jx
Aug 17 21:54:25.063: INFO: Got endpoints: latency-svc-qc2jx [8.818133992s]
Aug 17 21:54:25.379: INFO: Created: latency-svc-sd8cs
Aug 17 21:54:25.590: INFO: Got endpoints: latency-svc-sd8cs [9.282678214s]
Aug 17 21:54:25.645: INFO: Created: latency-svc-gxbll
Aug 17 21:54:25.671: INFO: Got endpoints: latency-svc-gxbll [8.927716825s]
Aug 17 21:54:25.790: INFO: Created: latency-svc-87kj5
Aug 17 21:54:25.858: INFO: Got endpoints: latency-svc-87kj5 [7.38289731s]
Aug 17 21:54:26.010: INFO: Created: latency-svc-j5p7r
Aug 17 21:54:26.432: INFO: Got endpoints: latency-svc-j5p7r [7.191866789s]
Aug 17 21:54:27.233: INFO: Created: latency-svc-nkprp
Aug 17 21:54:27.279: INFO: Got endpoints: latency-svc-nkprp [7.368759856s]
Aug 17 21:54:27.850: INFO: Created: latency-svc-dvbq4
Aug 17 21:54:27.909: INFO: Got endpoints: latency-svc-dvbq4 [7.236632705s]
Aug 17 21:54:28.778: INFO: Created: latency-svc-zkj82
Aug 17 21:54:28.793: INFO: Got endpoints: latency-svc-zkj82 [7.880859913s]
Aug 17 21:54:28.796: INFO: Created: latency-svc-d9ptm
Aug 17 21:54:28.830: INFO: Got endpoints: latency-svc-d9ptm [7.263938352s]
Aug 17 21:54:28.973: INFO: Created: latency-svc-r2ttd
Aug 17 21:54:28.987: INFO: Got endpoints: latency-svc-r2ttd [6.711503982s]
Aug 17 21:54:29.011: INFO: Created: latency-svc-ffhj9
Aug 17 21:54:29.022: INFO: Got endpoints: latency-svc-ffhj9 [6.459790348s]
Aug 17 21:54:29.173: INFO: Created: latency-svc-p9f6d
Aug 17 21:54:29.176: INFO: Got endpoints: latency-svc-p9f6d [6.206264088s]
Aug 17 21:54:29.329: INFO: Created: latency-svc-p7q4x
Aug 17 21:54:29.334: INFO: Got endpoints: latency-svc-p7q4x [6.322181112s]
Aug 17 21:54:29.765: INFO: Created: latency-svc-bd9k2
Aug 17 21:54:30.026: INFO: Got endpoints: latency-svc-bd9k2 [6.311721555s]
Aug 17 21:54:30.026: INFO: Created: latency-svc-pxgpp
Aug 17 21:54:30.056: INFO: Got endpoints: latency-svc-pxgpp [5.616257959s]
Aug 17 21:54:30.095: INFO: Created: latency-svc-skc5x
Aug 17 21:54:30.120: INFO: Got endpoints: latency-svc-skc5x [5.056790338s]
Aug 17 21:54:30.305: INFO: Created: latency-svc-bjszm
Aug 17 21:54:30.317: INFO: Got endpoints: latency-svc-bjszm [4.727187932s]
Aug 17 21:54:30.389: INFO: Created: latency-svc-6nmx4
Aug 17 21:54:30.514: INFO: Got endpoints: latency-svc-6nmx4 [4.842359701s]
Aug 17 21:54:30.516: INFO: Created: latency-svc-4vg2d
Aug 17 21:54:30.535: INFO: Got endpoints: latency-svc-4vg2d [4.677182825s]
Aug 17 21:54:30.945: INFO: Created: latency-svc-xc22q
Aug 17 21:54:31.161: INFO: Got endpoints: latency-svc-xc22q [4.729181887s]
Aug 17 21:54:31.219: INFO: Created: latency-svc-p6g9z
Aug 17 21:54:31.378: INFO: Got endpoints: latency-svc-p6g9z [4.098220627s]
Aug 17 21:54:31.682: INFO: Created: latency-svc-j6jkc
Aug 17 21:54:31.739: INFO: Got endpoints: latency-svc-j6jkc [3.830408254s]
Aug 17 21:54:32.002: INFO: Created: latency-svc-hc726
Aug 17 21:54:32.759: INFO: Got endpoints: latency-svc-hc726 [3.966471638s]
Aug 17 21:54:32.769: INFO: Created: latency-svc-n7d7k
Aug 17 21:54:32.842: INFO: Got endpoints: latency-svc-n7d7k [4.011641268s]
Aug 17 21:54:33.307: INFO: Created: latency-svc-68xpr
Aug 17 21:54:33.878: INFO: Got endpoints: latency-svc-68xpr [4.891323181s]
Aug 17 21:54:34.263: INFO: Created: latency-svc-jdqns
Aug 17 21:54:34.265: INFO: Got endpoints: latency-svc-jdqns [5.242991581s]
Aug 17 21:54:35.030: INFO: Created: latency-svc-hvzns
Aug 17 21:54:35.430: INFO: Got endpoints: latency-svc-hvzns [6.254403667s]
Aug 17 21:54:35.434: INFO: Created: latency-svc-94gtn
Aug 17 21:54:35.468: INFO: Got endpoints: latency-svc-94gtn [6.133939867s]
Aug 17 21:54:35.748: INFO: Created: latency-svc-855vb
Aug 17 21:54:35.778: INFO: Got endpoints: latency-svc-855vb [5.752123656s]
Aug 17 21:54:35.938: INFO: Created: latency-svc-9gkdl
Aug 17 21:54:35.989: INFO: Got endpoints: latency-svc-9gkdl [5.932551747s]
Aug 17 21:54:36.214: INFO: Created: latency-svc-87fgr
Aug 17 21:54:36.299: INFO: Got endpoints: latency-svc-87fgr [6.1782148s]
Aug 17 21:54:36.956: INFO: Created: latency-svc-z2wb6
Aug 17 21:54:36.995: INFO: Got endpoints: latency-svc-z2wb6 [6.678047842s]
Aug 17 21:54:37.345: INFO: Created: latency-svc-r6wc9
Aug 17 21:54:37.401: INFO: Got endpoints: latency-svc-r6wc9 [6.887362152s]
Aug 17 21:54:38.039: INFO: Created: latency-svc-htz88
Aug 17 21:54:38.336: INFO: Got endpoints: latency-svc-htz88 [7.800938783s]
Aug 17 21:54:38.367: INFO: Created: latency-svc-xlrzj
Aug 17 21:54:38.562: INFO: Got endpoints: latency-svc-xlrzj [7.400072934s]
Aug 17 21:54:38.750: INFO: Created: latency-svc-nwm7j
Aug 17 21:54:38.776: INFO: Got endpoints: latency-svc-nwm7j [7.398243281s]
Aug 17 21:54:38.778: INFO: Created: latency-svc-kqftb
Aug 17 21:54:38.799: INFO: Got endpoints: latency-svc-kqftb [7.059476157s]
Aug 17 21:54:38.909: INFO: Created: latency-svc-w2zjf
Aug 17 21:54:38.911: INFO: Got endpoints: latency-svc-w2zjf [6.151826033s]
Aug 17 21:54:38.980: INFO: Created: latency-svc-889d7
Aug 17 21:54:38.997: INFO: Got endpoints: latency-svc-889d7 [6.154671153s]
Aug 17 21:54:39.065: INFO: Created: latency-svc-2drzz
Aug 17 21:54:39.075: INFO: Got endpoints: latency-svc-2drzz [5.196677999s]
Aug 17 21:54:39.112: INFO: Created: latency-svc-2rdtm
Aug 17 21:54:39.135: INFO: Got endpoints: latency-svc-2rdtm [4.870096851s]
Aug 17 21:54:39.153: INFO: Created: latency-svc-2fwvr
Aug 17 21:54:39.215: INFO: Got endpoints: latency-svc-2fwvr [3.783890921s]
Aug 17 21:54:39.218: INFO: Created: latency-svc-kktvx
Aug 17 21:54:39.226: INFO: Got endpoints: latency-svc-kktvx [3.757495971s]
Aug 17 21:54:39.250: INFO: Created: latency-svc-6gt2m
Aug 17 21:54:39.262: INFO: Got endpoints: latency-svc-6gt2m [3.48400158s]
Aug 17 21:54:39.293: INFO: Created: latency-svc-cqz2m
Aug 17 21:54:39.305: INFO: Got endpoints: latency-svc-cqz2m [3.315668383s]
Aug 17 21:54:39.393: INFO: Created: latency-svc-2tggn
Aug 17 21:54:39.444: INFO: Got endpoints: latency-svc-2tggn [3.144335989s]
Aug 17 21:54:39.467: INFO: Created: latency-svc-8qsml
Aug 17 21:54:39.491: INFO: Got endpoints: latency-svc-8qsml [2.495017645s]
Aug 17 21:54:39.544: INFO: Created: latency-svc-wpp7t
Aug 17 21:54:39.559: INFO: Got endpoints: latency-svc-wpp7t [2.157369884s]
Aug 17 21:54:39.622: INFO: Created: latency-svc-7lb2x
Aug 17 21:54:39.693: INFO: Got endpoints: latency-svc-7lb2x [1.356427851s]
Aug 17 21:54:39.762: INFO: Created: latency-svc-xwjml
Aug 17 21:54:39.773: INFO: Got endpoints: latency-svc-xwjml [1.210635772s]
Aug 17 21:54:39.837: INFO: Created: latency-svc-p28d8
Aug 17 21:54:39.840: INFO: Got endpoints: latency-svc-p28d8 [1.063833875s]
Aug 17 21:54:39.875: INFO: Created: latency-svc-72bcq
Aug 17 21:54:39.895: INFO: Got endpoints: latency-svc-72bcq [1.09581438s]
Aug 17 21:54:40.035: INFO: Created: latency-svc-2mnmg
Aug 17 21:54:40.069: INFO: Got endpoints: latency-svc-2mnmg [1.157022815s]
Aug 17 21:54:40.133: INFO: Created: latency-svc-k2rfr
Aug 17 21:54:40.232: INFO: Got endpoints: latency-svc-k2rfr [1.2347004s]
Aug 17 21:54:40.253: INFO: Created: latency-svc-f6hbh
Aug 17 21:54:40.273: INFO: Got endpoints: latency-svc-f6hbh [1.19792114s]
Aug 17 21:54:40.319: INFO: Created: latency-svc-sk7cx
Aug 17 21:54:40.400: INFO: Got endpoints: latency-svc-sk7cx [1.26438055s]
Aug 17 21:54:40.405: INFO: Created: latency-svc-bdfd5
Aug 17 21:54:40.414: INFO: Got endpoints: latency-svc-bdfd5 [1.199547008s]
Aug 17 21:54:40.458: INFO: Created: latency-svc-p8r76
Aug 17 21:54:40.477: INFO: Got endpoints: latency-svc-p8r76 [1.251239366s]
Aug 17 21:54:40.609: INFO: Created: latency-svc-nln9w
Aug 17 21:54:40.622: INFO: Got endpoints: latency-svc-nln9w [1.359230843s]
Aug 17 21:54:40.667: INFO: Created: latency-svc-7924n
Aug 17 21:54:40.771: INFO: Got endpoints: latency-svc-7924n [1.465800418s]
Aug 17 21:54:40.783: INFO: Created: latency-svc-5926f
Aug 17 21:54:40.825: INFO: Created: latency-svc-zvvd5
Aug 17 21:54:40.827: INFO: Got endpoints: latency-svc-5926f [1.383361616s]
Aug 17 21:54:40.844: INFO: Got endpoints: latency-svc-zvvd5 [1.353363951s]
Aug 17 21:54:40.939: INFO: Created: latency-svc-9g6vn
Aug 17 21:54:40.959: INFO: Got endpoints: latency-svc-9g6vn [1.399738201s]
Aug 17 21:54:41.022: INFO: Created: latency-svc-vxtr5
Aug 17 21:54:42.126: INFO: Got endpoints: latency-svc-vxtr5 [2.433262972s]
Aug 17 21:54:42.207: INFO: Created: latency-svc-2v72q
Aug 17 21:54:42.401: INFO: Got endpoints: latency-svc-2v72q [2.627988484s]
Aug 17 21:54:42.683: INFO: Created: latency-svc-zn77t
Aug 17 21:54:42.689: INFO: Got endpoints: latency-svc-zn77t [2.84815283s]
Aug 17 21:54:43.601: INFO: Created: latency-svc-xfjp7
Aug 17 21:54:43.827: INFO: Got endpoints: latency-svc-xfjp7 [3.932025749s]
Aug 17 21:54:44.423: INFO: Created: latency-svc-xbbhw
Aug 17 21:54:44.765: INFO: Got endpoints: latency-svc-xbbhw [4.696598336s]
Aug 17 21:54:45.331: INFO: Created: latency-svc-sjnc9
Aug 17 21:54:45.820: INFO: Got endpoints: latency-svc-sjnc9 [5.588667243s]
Aug 17 21:54:45.826: INFO: Created: latency-svc-vf9hh
Aug 17 21:54:45.884: INFO: Got endpoints: latency-svc-vf9hh [5.610556398s]
Aug 17 21:54:46.278: INFO: Created: latency-svc-bw85t
Aug 17 21:54:46.592: INFO: Got endpoints: latency-svc-bw85t [6.191299404s]
Aug 17 21:54:46.631: INFO: Created: latency-svc-2fh68
Aug 17 21:54:47.143: INFO: Created: latency-svc-24nmc
Aug 17 21:54:47.143: INFO: Got endpoints: latency-svc-2fh68 [6.728846967s]
Aug 17 21:54:47.214: INFO: Got endpoints: latency-svc-24nmc [6.736859321s]
Aug 17 21:54:47.941: INFO: Created: latency-svc-d4dfx
Aug 17 21:54:48.125: INFO: Got endpoints: latency-svc-d4dfx [7.502879903s]
Aug 17 21:54:48.335: INFO: Created: latency-svc-kjpjz
Aug 17 21:54:48.346: INFO: Got endpoints: latency-svc-kjpjz [7.575581354s]
Aug 17 21:54:48.582: INFO: Created: latency-svc-wbct2
Aug 17 21:54:48.598: INFO: Got endpoints: latency-svc-wbct2 [7.770832015s]
Aug 17 21:54:48.638: INFO: Created: latency-svc-fg9rl
Aug 17 21:54:48.672: INFO: Got endpoints: latency-svc-fg9rl [7.827357155s]
Aug 17 21:54:48.765: INFO: Created: latency-svc-xh5m8
Aug 17 21:54:48.797: INFO: Got endpoints: latency-svc-xh5m8 [7.837737155s]
Aug 17 21:54:49.099: INFO: Created: latency-svc-4s22x
Aug 17 21:54:49.778: INFO: Got endpoints: latency-svc-4s22x [7.651187683s]
Aug 17 21:54:49.851: INFO: Created: latency-svc-xwk58
Aug 17 21:54:50.072: INFO: Got endpoints: latency-svc-xwk58 [7.670712937s]
Aug 17 21:54:50.075: INFO: Created: latency-svc-hznl2
Aug 17 21:54:50.413: INFO: Got endpoints: latency-svc-hznl2 [7.723937922s]
Aug 17 21:54:50.417: INFO: Created: latency-svc-tdnnv
Aug 17 21:54:50.609: INFO: Got endpoints: latency-svc-tdnnv [6.782348977s]
Aug 17 21:54:51.003: INFO: Created: latency-svc-7r29k
Aug 17 21:54:51.379: INFO: Got endpoints: latency-svc-7r29k [6.613228456s]
Aug 17 21:54:51.616: INFO: Created: latency-svc-6jg9j
Aug 17 21:54:51.656: INFO: Got endpoints: latency-svc-6jg9j [5.834799488s]
Aug 17 21:54:51.831: INFO: Created: latency-svc-z2pcm
Aug 17 21:54:51.902: INFO: Got endpoints: latency-svc-z2pcm [6.018023801s]
Aug 17 21:54:52.275: INFO: Created: latency-svc-c27vh
Aug 17 21:54:52.291: INFO: Got endpoints: latency-svc-c27vh [5.699164453s]
Aug 17 21:54:52.760: INFO: Created: latency-svc-6j86x
Aug 17 21:54:52.765: INFO: Got endpoints: latency-svc-6j86x [5.621226714s]
Aug 17 21:54:52.953: INFO: Created: latency-svc-n7nz8
Aug 17 21:54:52.968: INFO: Got endpoints: latency-svc-n7nz8 [5.7538508s]
Aug 17 21:54:53.118: INFO: Created: latency-svc-mng54
Aug 17 21:54:53.121: INFO: Got endpoints: latency-svc-mng54 [4.996148224s]
Aug 17 21:54:53.585: INFO: Created: latency-svc-7jbx8
Aug 17 21:54:53.664: INFO: Got endpoints: latency-svc-7jbx8 [5.317492765s]
Aug 17 21:54:53.878: INFO: Created: latency-svc-h5zr6
Aug 17 21:54:54.042: INFO: Got endpoints: latency-svc-h5zr6 [5.443692546s]
Aug 17 21:54:54.044: INFO: Created: latency-svc-xjrlr
Aug 17 21:54:54.093: INFO: Got endpoints: latency-svc-xjrlr [5.420665169s]
Aug 17 21:54:54.528: INFO: Created: latency-svc-t6f4d
Aug 17 21:54:54.718: INFO: Got endpoints: latency-svc-t6f4d [5.921132404s]
Aug 17 21:54:55.083: INFO: Created: latency-svc-xzqf9
Aug 17 21:54:55.340: INFO: Got endpoints: latency-svc-xzqf9 [5.562220825s]
Aug 17 21:54:55.824: INFO: Created: latency-svc-5hlwb
Aug 17 21:54:56.041: INFO: Got endpoints: latency-svc-5hlwb [5.969091055s]
Aug 17 21:54:56.245: INFO: Created: latency-svc-m2h7w
Aug 17 21:54:56.862: INFO: Got endpoints: latency-svc-m2h7w [6.449321836s]
Aug 17 21:54:56.862: INFO: Created: latency-svc-9pw7t
Aug 17 21:54:57.108: INFO: Got endpoints: latency-svc-9pw7t [6.498374967s]
Aug 17 21:54:57.110: INFO: Created: latency-svc-hc69j
Aug 17 21:54:57.187: INFO: Got endpoints: latency-svc-hc69j [5.808020023s]
Aug 17 21:54:57.556: INFO: Created: latency-svc-vpsml
Aug 17 21:54:57.826: INFO: Got endpoints: latency-svc-vpsml [6.169987543s]
Aug 17 21:54:58.253: INFO: Created: latency-svc-jwbw8
Aug 17 21:54:58.510: INFO: Got endpoints: latency-svc-jwbw8 [6.606848868s]
Aug 17 21:54:59.245: INFO: Created: latency-svc-k99vj
Aug 17 21:54:59.245: INFO: Created: latency-svc-l7mph
Aug 17 21:54:59.251: INFO: Got endpoints: latency-svc-k99vj [6.485552028s]
Aug 17 21:54:59.544: INFO: Got endpoints: latency-svc-l7mph [7.252882941s]
Aug 17 21:54:59.864: INFO: Created: latency-svc-nfp2v
Aug 17 21:54:59.921: INFO: Got endpoints: latency-svc-nfp2v [6.952282728s]
Aug 17 21:55:00.574: INFO: Created: latency-svc-rf2dr
Aug 17 21:55:00.765: INFO: Got endpoints: latency-svc-rf2dr [7.644189876s]
Aug 17 21:55:01.151: INFO: Created: latency-svc-p66dk
Aug 17 21:55:01.179: INFO: Got endpoints: latency-svc-p66dk [7.51456296s]
Aug 17 21:55:02.045: INFO: Created: latency-svc-ngjfj
Aug 17 21:55:02.439: INFO: Got endpoints: latency-svc-ngjfj [8.396539576s]
Aug 17 21:55:03.150: INFO: Created: latency-svc-5zjp4
Aug 17 21:55:03.587: INFO: Got endpoints: latency-svc-5zjp4 [9.493723073s]
Aug 17 21:55:03.934: INFO: Created: latency-svc-98fh5
Aug 17 21:55:03.939: INFO: Got endpoints: latency-svc-98fh5 [9.22057479s]
Aug 17 21:55:04.875: INFO: Created: latency-svc-klh7x
Aug 17 21:55:04.906: INFO: Got endpoints: latency-svc-klh7x [9.565279688s]
Aug 17 21:55:05.064: INFO: Created: latency-svc-d9zqt
Aug 17 21:55:05.069: INFO: Got endpoints: latency-svc-d9zqt [9.027521691s]
Aug 17 21:55:05.323: INFO: Created: latency-svc-h6p68
Aug 17 21:55:05.769: INFO: Created: latency-svc-8p7h7
Aug 17 21:55:05.770: INFO: Got endpoints: latency-svc-h6p68 [8.907605912s]
Aug 17 21:55:06.136: INFO: Got endpoints: latency-svc-8p7h7 [9.027201893s]
Aug 17 21:55:06.394: INFO: Created: latency-svc-nwq58
Aug 17 21:55:06.701: INFO: Got endpoints: latency-svc-nwq58 [9.513311066s]
Aug 17 21:55:07.305: INFO: Created: latency-svc-ndttt
Aug 17 21:55:07.309: INFO: Got endpoints: latency-svc-ndttt [9.483172219s]
Aug 17 21:55:07.529: INFO: Created: latency-svc-mqv5b
Aug 17 21:55:07.591: INFO: Got endpoints: latency-svc-mqv5b [9.081155023s]
Aug 17 21:55:07.964: INFO: Created: latency-svc-rnbtq
Aug 17 21:55:08.162: INFO: Got endpoints: latency-svc-rnbtq [8.911406083s]
Aug 17 21:55:08.376: INFO: Created: latency-svc-th6b5
Aug 17 21:55:08.390: INFO: Got endpoints: latency-svc-th6b5 [8.845813942s]
Aug 17 21:55:08.450: INFO: Created: latency-svc-jjxsd
Aug 17 21:55:08.610: INFO: Got endpoints: latency-svc-jjxsd [8.689456318s]
Aug 17 21:55:08.636: INFO: Created: latency-svc-lvvvb
Aug 17 21:55:08.708: INFO: Got endpoints: latency-svc-lvvvb [7.943001363s]
Aug 17 21:55:08.843: INFO: Created: latency-svc-56wxm
Aug 17 21:55:08.847: INFO: Got endpoints: latency-svc-56wxm [7.667493828s]
Aug 17 21:55:08.926: INFO: Created: latency-svc-skqq4
Aug 17 21:55:08.940: INFO: Got endpoints: latency-svc-skqq4 [6.500405805s]
Aug 17 21:55:09.086: INFO: Created: latency-svc-mmf4k
Aug 17 21:55:09.228: INFO: Got endpoints: latency-svc-mmf4k [5.640889021s]
Aug 17 21:55:10.341: INFO: Created: latency-svc-bb42q
Aug 17 21:55:10.396: INFO: Got endpoints: latency-svc-bb42q [6.456728724s]
Aug 17 21:55:10.611: INFO: Created: latency-svc-4dffq
Aug 17 21:55:10.687: INFO: Got endpoints: latency-svc-4dffq [5.780510298s]
Aug 17 21:55:11.023: INFO: Created: latency-svc-4l6r4
Aug 17 21:55:11.505: INFO: Got endpoints: latency-svc-4l6r4 [6.435536198s]
Aug 17 21:55:11.511: INFO: Created: latency-svc-9g9lg
Aug 17 21:55:11.711: INFO: Got endpoints: latency-svc-9g9lg [5.940849994s]
Aug 17 21:55:11.722: INFO: Created: latency-svc-zk27b
Aug 17 21:55:11.743: INFO: Got endpoints: latency-svc-zk27b [5.607449974s]
Aug 17 21:55:11.878: INFO: Created: latency-svc-zl26l
Aug 17 21:55:11.882: INFO: Got endpoints: latency-svc-zl26l [5.180496051s]
Aug 17 21:55:11.920: INFO: Created: latency-svc-zzhsr
Aug 17 21:55:11.933: INFO: Got endpoints: latency-svc-zzhsr [4.623581638s]
Aug 17 21:55:11.950: INFO: Created: latency-svc-xm7gx
Aug 17 21:55:11.964: INFO: Got endpoints: latency-svc-xm7gx [4.372397611s]
Aug 17 21:55:12.070: INFO: Created: latency-svc-jr4lm
Aug 17 21:55:12.143: INFO: Created: latency-svc-62cds
Aug 17 21:55:12.144: INFO: Got endpoints: latency-svc-jr4lm [3.981172827s]
Aug 17 21:55:12.157: INFO: Got endpoints: latency-svc-62cds [3.766562176s]
Aug 17 21:55:12.215: INFO: Created: latency-svc-lqg96
Aug 17 21:55:12.233: INFO: Got endpoints: latency-svc-lqg96 [3.622696257s]
Aug 17 21:55:12.287: INFO: Created: latency-svc-576r9
Aug 17 21:55:12.378: INFO: Got endpoints: latency-svc-576r9 [3.669345701s]
Aug 17 21:55:12.379: INFO: Created: latency-svc-cmztk
Aug 17 21:55:12.409: INFO: Got endpoints: latency-svc-cmztk [3.561908035s]
Aug 17 21:55:12.457: INFO: Created: latency-svc-pfk8m
Aug 17 21:55:12.538: INFO: Got endpoints: latency-svc-pfk8m [3.598683902s]
Aug 17 21:55:12.558: INFO: Created: latency-svc-nqkft
Aug 17 21:55:12.589: INFO: Got endpoints: latency-svc-nqkft [3.361184834s]
Aug 17 21:55:12.753: INFO: Created: latency-svc-btfdn
Aug 17 21:55:12.757: INFO: Got endpoints: latency-svc-btfdn [2.360698934s]
Aug 17 21:55:12.786: INFO: Created: latency-svc-h9knc
Aug 17 21:55:12.834: INFO: Got endpoints: latency-svc-h9knc [2.147038133s]
Aug 17 21:55:12.891: INFO: Created: latency-svc-2sm46
Aug 17 21:55:12.905: INFO: Got endpoints: latency-svc-2sm46 [1.400587627s]
Aug 17 21:55:12.936: INFO: Created: latency-svc-gb8v7
Aug 17 21:55:12.949: INFO: Got endpoints: latency-svc-gb8v7 [1.2377022s]
Aug 17 21:55:12.966: INFO: Created: latency-svc-p8fks
Aug 17 21:55:12.979: INFO: Got endpoints: latency-svc-p8fks [1.235329292s]
Aug 17 21:55:13.083: INFO: Created: latency-svc-gjp7z
Aug 17 21:55:13.086: INFO: Got endpoints: latency-svc-gjp7z [1.204682463s]
Aug 17 21:55:13.118: INFO: Created: latency-svc-fjxc9
Aug 17 21:55:13.130: INFO: Got endpoints: latency-svc-fjxc9 [1.197027516s]
Aug 17 21:55:13.148: INFO: Created: latency-svc-jrlkx
Aug 17 21:55:13.221: INFO: Got endpoints: latency-svc-jrlkx [1.256918771s]
Aug 17 21:55:13.268: INFO: Created: latency-svc-zpc69
Aug 17 21:55:13.293: INFO: Got endpoints: latency-svc-zpc69 [1.148453221s]
Aug 17 21:55:13.369: INFO: Created: latency-svc-tvnb4
Aug 17 21:55:13.377: INFO: Got endpoints: latency-svc-tvnb4 [1.219770859s]
Aug 17 21:55:13.651: INFO: Created: latency-svc-42nt9
Aug 17 21:55:13.689: INFO: Got endpoints: latency-svc-42nt9 [1.455205531s]
Aug 17 21:55:13.896: INFO: Created: latency-svc-bcx4j
Aug 17 21:55:13.929: INFO: Got endpoints: latency-svc-bcx4j [1.550917733s]
Aug 17 21:55:14.049: INFO: Created: latency-svc-cxwkr
Aug 17 21:55:14.074: INFO: Got endpoints: latency-svc-cxwkr [1.664820153s]
Aug 17 21:55:14.935: INFO: Created: latency-svc-b9gkc
Aug 17 21:55:15.005: INFO: Got endpoints: latency-svc-b9gkc [2.46635172s]
Aug 17 21:55:15.343: INFO: Created: latency-svc-4fnmv
Aug 17 21:55:15.483: INFO: Got endpoints: latency-svc-4fnmv [2.893925368s]
Aug 17 21:55:15.541: INFO: Created: latency-svc-49txh
Aug 17 21:55:15.566: INFO: Got endpoints: latency-svc-49txh [2.809134666s]
Aug 17 21:55:15.568: INFO: Latencies: [287.597171ms 505.688894ms 539.146387ms 863.173518ms 1.063833875s 1.09581438s 1.148453221s 1.157022815s 1.162940447s 1.197027516s 1.19792114s 1.199547008s 1.204682463s 1.210635772s 1.219770859s 1.2347004s 1.235329292s 1.2377022s 1.251239366s 1.256918771s 1.26438055s 1.353363951s 1.356427851s 1.359230843s 1.383361616s 1.399738201s 1.400587627s 1.455205531s 1.465800418s 1.550917733s 1.664820153s 1.792075388s 1.881268853s 2.064198232s 2.072441956s 2.147038133s 2.157369884s 2.184925489s 2.22885171s 2.286045711s 2.342686349s 2.360698934s 2.361755968s 2.410564764s 2.433262972s 2.46635172s 2.495017645s 2.513210264s 2.627988484s 2.684704808s 2.809134666s 2.84815283s 2.893925368s 2.932967656s 3.144335989s 3.209537273s 3.261624308s 3.315668383s 3.361184834s 3.48400158s 3.48539074s 3.561908035s 3.598683902s 3.622696257s 3.669345701s 3.69394725s 3.757495971s 3.766562176s 3.783890921s 3.830408254s 3.898196971s 3.932025749s 3.966471638s 3.981172827s 4.011641268s 4.098220627s 4.269552946s 4.355134403s 4.372397611s 4.623581638s 4.677182825s 4.696598336s 4.727187932s 4.729181887s 4.842359701s 4.870096851s 4.891323181s 4.996148224s 5.056790338s 5.180496051s 5.196677999s 5.242991581s 5.257352753s 5.317492765s 5.420665169s 5.443692546s 5.562220825s 5.588667243s 5.607449974s 5.610556398s 5.616257959s 5.621226714s 5.640889021s 5.699164453s 5.699249147s 5.752123656s 5.7538508s 5.780510298s 5.808020023s 5.834799488s 5.921132404s 5.932551747s 5.940849994s 5.952149594s 5.969091055s 6.018023801s 6.104397322s 6.133939867s 6.151826033s 6.154671153s 6.169987543s 6.1782148s 6.191299404s 6.206264088s 6.253407106s 6.254403667s 6.311721555s 6.322181112s 6.367123894s 6.435536198s 6.449321836s 6.454171798s 6.456728724s 6.459790348s 6.485552028s 6.498374967s 6.500405805s 6.565935069s 6.606848868s 6.613228456s 6.678047842s 6.711503982s 6.728846967s 6.736859321s 6.782348977s 6.887362152s 6.952282728s 6.987714902s 7.04523309s 7.059476157s 7.191866789s 7.236632705s 7.252882941s 7.263938352s 7.368759856s 7.38289731s 7.398243281s 7.400072934s 7.502879903s 7.51456296s 7.575581354s 7.644189876s 7.651187683s 7.667493828s 7.670712937s 7.723937922s 7.744565789s 7.770832015s 7.800938783s 7.827357155s 7.837737155s 7.880859913s 7.936537816s 7.943001363s 8.220285918s 8.350669539s 8.396539576s 8.505335232s 8.517413416s 8.519665321s 8.53300149s 8.599605717s 8.643033523s 8.689456318s 8.726563619s 8.818133992s 8.845813942s 8.846201194s 8.907605912s 8.911406083s 8.927716825s 9.027201893s 9.027521691s 9.081155023s 9.22057479s 9.282678214s 9.483172219s 9.493723073s 9.513311066s 9.565279688s]
Aug 17 21:55:15.569: INFO: 50 %ile: 5.616257959s
Aug 17 21:55:15.569: INFO: 90 %ile: 8.53300149s
Aug 17 21:55:15.569: INFO: 99 %ile: 9.513311066s
Aug 17 21:55:15.569: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:55:15.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9819" for this suite.

• [SLOW TEST:78.273 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":34,"skipped":509,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:55:15.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 21:55:20.560: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 21:55:22.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298120, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298120, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298121, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298120, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 21:55:24.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298120, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298120, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298121, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298120, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 21:55:26.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298120, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298120, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298121, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298120, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 21:55:30.006: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:55:30.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4011" for this suite.
STEP: Destroying namespace "webhook-4011-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.485 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":35,"skipped":534,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:55:32.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:55:44.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8898" for this suite.

• [SLOW TEST:13.017 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":36,"skipped":534,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:55:45.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-7fe869cb-f00d-4579-bc1a-444b4619a242 in namespace container-probe-8580
Aug 17 21:55:54.830: INFO: Started pod liveness-7fe869cb-f00d-4579-bc1a-444b4619a242 in namespace container-probe-8580
STEP: checking the pod's current state and verifying that restartCount is present
Aug 17 21:55:54.993: INFO: Initial restart count of pod liveness-7fe869cb-f00d-4579-bc1a-444b4619a242 is 0
Aug 17 21:56:12.672: INFO: Restart count of pod container-probe-8580/liveness-7fe869cb-f00d-4579-bc1a-444b4619a242 is now 1 (17.678457031s elapsed)
Aug 17 21:56:31.156: INFO: Restart count of pod container-probe-8580/liveness-7fe869cb-f00d-4579-bc1a-444b4619a242 is now 2 (36.16244292s elapsed)
Aug 17 21:56:52.767: INFO: Restart count of pod container-probe-8580/liveness-7fe869cb-f00d-4579-bc1a-444b4619a242 is now 3 (57.7741372s elapsed)
Aug 17 21:57:13.131: INFO: Restart count of pod container-probe-8580/liveness-7fe869cb-f00d-4579-bc1a-444b4619a242 is now 4 (1m18.13749573s elapsed)
Aug 17 21:58:11.580: INFO: Restart count of pod container-probe-8580/liveness-7fe869cb-f00d-4579-bc1a-444b4619a242 is now 5 (2m16.587300342s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:58:11.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8580" for this suite.

• [SLOW TEST:146.912 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":542,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:58:12.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:58:46.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6025" for this suite.

• [SLOW TEST:34.196 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":38,"skipped":563,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:58:46.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4127.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  sleep 1;
done

STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local;
  check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4127.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4127.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  sleep 1;
done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 17 21:58:58.759: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:58:58.765: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:58:58.769: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:58:58.774: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:58:58.786: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:58:58.790: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:58:58.793: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:58:58.796: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:58:58.802: INFO: Lookups using dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local]

Aug 17 21:59:03.808: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:03.813: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:03.816: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:03.821: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:03.832: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:03.836: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:03.839: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:03.842: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:03.850: INFO: Lookups using dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local]

Aug 17 21:59:08.936: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:09.162: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:09.436: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:09.441: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:09.451: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:09.455: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:09.459: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:09.463: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:09.775: INFO: Lookups using dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local]

Aug 17 21:59:13.814: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:13.831: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:13.840: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:13.843: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:13.852: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:13.857: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:14.176: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:14.438: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:14.493: INFO: Lookups using dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local]

Aug 17 21:59:18.936: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:18.941: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:18.946: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:18.949: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:18.961: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:18.965: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:18.970: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:18.973: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:18.980: INFO: Lookups using dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local]

Aug 17 21:59:23.999: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:24.005: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:24.010: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:24.014: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:24.305: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:24.395: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:24.498: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:24.502: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local from pod dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a: the server could not find the requested resource (get pods dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a)
Aug 17 21:59:24.509: INFO: Lookups using dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4127.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4127.svc.cluster.local jessie_udp@dns-test-service-2.dns-4127.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4127.svc.cluster.local]

Aug 17 21:59:29.574: INFO: DNS probes using dns-4127/dns-test-481bc6b0-3848-43d0-ace8-a75b916a8f4a succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:59:31.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4127" for this suite.

• [SLOW TEST:46.373 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":39,"skipped":587,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:59:32.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-eb543bc4-de77-4026-9920-56186f9262e9
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-eb543bc4-de77-4026-9920-56186f9262e9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:59:44.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3715" for this suite.

• [SLOW TEST:12.259 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":614,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:59:44.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 17 21:59:44.929: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 17 21:59:44.947: INFO: Waiting for terminating namespaces to be deleted...
Aug 17 21:59:44.952: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 17 21:59:44.975: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 21:59:44.976: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 17 21:59:44.976: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 21:59:44.976: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 17 21:59:44.976: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 17 21:59:44.994: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 21:59:44.994: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 17 21:59:44.994: INFO: pod-projected-configmaps-de6cc1bc-dd25-41c4-8181-e683bd0ddd3d from projected-3715 started at 2020-08-17 21:59:36 +0000 UTC (1 container statuses recorded)
Aug 17 21:59:44.994: INFO: 	Container projected-configmap-volume-test ready: true, restart count 0
Aug 17 21:59:44.994: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 21:59:44.994: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Aug 17 21:59:45.086: INFO: Pod kindnet-gxck9 requesting resource cpu=100m on Node jerma-worker2
Aug 17 21:59:45.087: INFO: Pod kindnet-tfrcx requesting resource cpu=100m on Node jerma-worker
Aug 17 21:59:45.087: INFO: Pod kube-proxy-ckhpn requesting resource cpu=0m on Node jerma-worker2
Aug 17 21:59:45.087: INFO: Pod kube-proxy-lgd85 requesting resource cpu=0m on Node jerma-worker
Aug 17 21:59:45.087: INFO: Pod pod-projected-configmaps-de6cc1bc-dd25-41c4-8181-e683bd0ddd3d requesting resource cpu=0m on Node jerma-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Aug 17 21:59:45.087: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
Aug 17 21:59:45.118: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-518be0eb-9c4f-4df0-8dc1-21f6da0f012f.162c2d20703827a6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1514/filler-pod-518be0eb-9c4f-4df0-8dc1-21f6da0f012f to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-518be0eb-9c4f-4df0-8dc1-21f6da0f012f.162c2d21402519ed], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-518be0eb-9c4f-4df0-8dc1-21f6da0f012f.162c2d21e3d71b80], Reason = [Created], Message = [Created container filler-pod-518be0eb-9c4f-4df0-8dc1-21f6da0f012f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-518be0eb-9c4f-4df0-8dc1-21f6da0f012f.162c2d21f3f3e284], Reason = [Started], Message = [Started container filler-pod-518be0eb-9c4f-4df0-8dc1-21f6da0f012f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c945ad3f-9d32-4ddb-bdd8-8fb9cf6a4c11.162c2d206e76d01f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1514/filler-pod-c945ad3f-9d32-4ddb-bdd8-8fb9cf6a4c11 to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c945ad3f-9d32-4ddb-bdd8-8fb9cf6a4c11.162c2d210542aa94], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c945ad3f-9d32-4ddb-bdd8-8fb9cf6a4c11.162c2d21d328b4fa], Reason = [Created], Message = [Created container filler-pod-c945ad3f-9d32-4ddb-bdd8-8fb9cf6a4c11]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c945ad3f-9d32-4ddb-bdd8-8fb9cf6a4c11.162c2d21e3d732e8], Reason = [Started], Message = [Started container filler-pod-c945ad3f-9d32-4ddb-bdd8-8fb9cf6a4c11]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162c2d2250304b17], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 21:59:54.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1514" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:9.463 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":41,"skipped":682,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 21:59:54.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 21:59:54.653: INFO: Create a RollingUpdate DaemonSet
Aug 17 21:59:54.660: INFO: Check that daemon pods launch on every node of the cluster
Aug 17 21:59:54.681: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:59:54.731: INFO: Number of nodes with available pods: 0
Aug 17 21:59:54.731: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:59:55.739: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:59:55.745: INFO: Number of nodes with available pods: 0
Aug 17 21:59:55.745: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:59:56.767: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:59:57.177: INFO: Number of nodes with available pods: 0
Aug 17 21:59:57.177: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:59:58.365: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:59:58.391: INFO: Number of nodes with available pods: 0
Aug 17 21:59:58.391: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:59:58.813: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:59:59.021: INFO: Number of nodes with available pods: 0
Aug 17 21:59:59.021: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 21:59:59.837: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 21:59:59.844: INFO: Number of nodes with available pods: 0
Aug 17 21:59:59.844: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 22:00:00.865: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:00:01.190: INFO: Number of nodes with available pods: 1
Aug 17 22:00:01.190: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 22:00:01.791: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:00:01.822: INFO: Number of nodes with available pods: 2
Aug 17 22:00:01.822: INFO: Number of running nodes: 2, number of available pods: 2
Aug 17 22:00:01.822: INFO: Update the DaemonSet to trigger a rollout
Aug 17 22:00:01.876: INFO: Updating DaemonSet daemon-set
Aug 17 22:00:13.222: INFO: Roll back the DaemonSet before rollout is complete
Aug 17 22:00:13.905: INFO: Updating DaemonSet daemon-set
Aug 17 22:00:13.905: INFO: Make sure DaemonSet rollback is complete
Aug 17 22:00:14.471: INFO: Wrong image for pod: daemon-set-gggcn. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 17 22:00:14.471: INFO: Pod daemon-set-gggcn is not available
Aug 17 22:00:14.561: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:00:15.960: INFO: Wrong image for pod: daemon-set-gggcn. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 17 22:00:15.961: INFO: Pod daemon-set-gggcn is not available
Aug 17 22:00:16.450: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:00:16.734: INFO: Wrong image for pod: daemon-set-gggcn. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 17 22:00:16.734: INFO: Pod daemon-set-gggcn is not available
Aug 17 22:00:16.770: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:00:17.567: INFO: Pod daemon-set-hs4bp is not available
Aug 17 22:00:17.674: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1955, will wait for the garbage collector to delete the pods
Aug 17 22:00:18.551: INFO: Deleting DaemonSet.extensions daemon-set took: 7.454246ms
Aug 17 22:00:20.352: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.800754992s
Aug 17 22:00:31.658: INFO: Number of nodes with available pods: 0
Aug 17 22:00:31.659: INFO: Number of running nodes: 0, number of available pods: 0
Aug 17 22:00:31.663: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1955/daemonsets","resourceVersion":"878499"},"items":null}

Aug 17 22:00:31.667: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1955/pods","resourceVersion":"878499"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:00:31.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1955" for this suite.

• [SLOW TEST:37.353 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":42,"skipped":700,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:00:31.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:00:32.275: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:00:33.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5181" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":43,"skipped":707,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:00:33.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:00:33.505: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 17 22:00:33.515: INFO: Number of nodes with available pods: 0
Aug 17 22:00:33.515: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 17 22:00:33.612: INFO: Number of nodes with available pods: 0
Aug 17 22:00:33.612: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:34.617: INFO: Number of nodes with available pods: 0
Aug 17 22:00:34.617: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:35.620: INFO: Number of nodes with available pods: 0
Aug 17 22:00:35.620: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:36.619: INFO: Number of nodes with available pods: 0
Aug 17 22:00:36.619: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:37.701: INFO: Number of nodes with available pods: 1
Aug 17 22:00:37.701: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 17 22:00:37.864: INFO: Number of nodes with available pods: 1
Aug 17 22:00:37.864: INFO: Number of running nodes: 0, number of available pods: 1
Aug 17 22:00:38.948: INFO: Number of nodes with available pods: 0
Aug 17 22:00:38.948: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 17 22:00:39.238: INFO: Number of nodes with available pods: 0
Aug 17 22:00:39.238: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:40.290: INFO: Number of nodes with available pods: 0
Aug 17 22:00:40.291: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:41.245: INFO: Number of nodes with available pods: 0
Aug 17 22:00:41.245: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:42.244: INFO: Number of nodes with available pods: 0
Aug 17 22:00:42.244: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:43.245: INFO: Number of nodes with available pods: 0
Aug 17 22:00:43.245: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:44.247: INFO: Number of nodes with available pods: 0
Aug 17 22:00:44.247: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:45.245: INFO: Number of nodes with available pods: 0
Aug 17 22:00:45.245: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:46.343: INFO: Number of nodes with available pods: 0
Aug 17 22:00:46.343: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:47.244: INFO: Number of nodes with available pods: 0
Aug 17 22:00:47.245: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:48.457: INFO: Number of nodes with available pods: 0
Aug 17 22:00:48.457: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:49.245: INFO: Number of nodes with available pods: 0
Aug 17 22:00:49.245: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:50.422: INFO: Number of nodes with available pods: 0
Aug 17 22:00:50.422: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:51.254: INFO: Number of nodes with available pods: 0
Aug 17 22:00:51.254: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:52.355: INFO: Number of nodes with available pods: 0
Aug 17 22:00:52.355: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:53.259: INFO: Number of nodes with available pods: 0
Aug 17 22:00:53.259: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:54.386: INFO: Number of nodes with available pods: 0
Aug 17 22:00:54.386: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:55.278: INFO: Number of nodes with available pods: 0
Aug 17 22:00:55.278: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:00:56.245: INFO: Number of nodes with available pods: 1
Aug 17 22:00:56.245: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-273, will wait for the garbage collector to delete the pods
Aug 17 22:00:56.319: INFO: Deleting DaemonSet.extensions daemon-set took: 9.721289ms
Aug 17 22:00:56.620: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.989566ms
Aug 17 22:01:11.925: INFO: Number of nodes with available pods: 0
Aug 17 22:01:11.926: INFO: Number of running nodes: 0, number of available pods: 0
Aug 17 22:01:11.930: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-273/daemonsets","resourceVersion":"878709"},"items":null}

Aug 17 22:01:11.995: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-273/pods","resourceVersion":"878709"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:01:12.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-273" for this suite.

• [SLOW TEST:38.651 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":44,"skipped":729,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:01:12.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2799
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 17 22:01:12.162: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 17 22:01:42.344: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.247 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2799 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:01:42.345: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:01:42.443590       7 log.go:172] (0x4003264840) (0x4001c070e0) Create stream
I0817 22:01:42.443766       7 log.go:172] (0x4003264840) (0x4001c070e0) Stream added, broadcasting: 1
I0817 22:01:42.446986       7 log.go:172] (0x4003264840) Reply frame received for 1
I0817 22:01:42.447288       7 log.go:172] (0x4003264840) (0x40022f20a0) Create stream
I0817 22:01:42.447368       7 log.go:172] (0x4003264840) (0x40022f20a0) Stream added, broadcasting: 3
I0817 22:01:42.449012       7 log.go:172] (0x4003264840) Reply frame received for 3
I0817 22:01:42.449162       7 log.go:172] (0x4003264840) (0x40022f2140) Create stream
I0817 22:01:42.449229       7 log.go:172] (0x4003264840) (0x40022f2140) Stream added, broadcasting: 5
I0817 22:01:42.450615       7 log.go:172] (0x4003264840) Reply frame received for 5
I0817 22:01:43.517438       7 log.go:172] (0x4003264840) Data frame received for 5
I0817 22:01:43.517575       7 log.go:172] (0x40022f2140) (5) Data frame handling
I0817 22:01:43.517835       7 log.go:172] (0x4003264840) Data frame received for 3
I0817 22:01:43.517979       7 log.go:172] (0x40022f20a0) (3) Data frame handling
I0817 22:01:43.518075       7 log.go:172] (0x40022f20a0) (3) Data frame sent
I0817 22:01:43.518146       7 log.go:172] (0x4003264840) Data frame received for 3
I0817 22:01:43.518213       7 log.go:172] (0x40022f20a0) (3) Data frame handling
I0817 22:01:43.519866       7 log.go:172] (0x4003264840) Data frame received for 1
I0817 22:01:43.520016       7 log.go:172] (0x4001c070e0) (1) Data frame handling
I0817 22:01:43.520223       7 log.go:172] (0x4001c070e0) (1) Data frame sent
I0817 22:01:43.520396       7 log.go:172] (0x4003264840) (0x4001c070e0) Stream removed, broadcasting: 1
I0817 22:01:43.520567       7 log.go:172] (0x4003264840) Go away received
I0817 22:01:43.520970       7 log.go:172] (0x4003264840) (0x4001c070e0) Stream removed, broadcasting: 1
I0817 22:01:43.521060       7 log.go:172] (0x4003264840) (0x40022f20a0) Stream removed, broadcasting: 3
I0817 22:01:43.521150       7 log.go:172] (0x4003264840) (0x40022f2140) Stream removed, broadcasting: 5
Aug 17 22:01:43.521: INFO: Found all expected endpoints: [netserver-0]
Aug 17 22:01:43.526: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.53 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2799 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:01:43.526: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:01:43.589115       7 log.go:172] (0x4001fb8790) (0x400205c820) Create stream
I0817 22:01:43.589261       7 log.go:172] (0x4001fb8790) (0x400205c820) Stream added, broadcasting: 1
I0817 22:01:43.593097       7 log.go:172] (0x4001fb8790) Reply frame received for 1
I0817 22:01:43.593356       7 log.go:172] (0x4001fb8790) (0x4002f0c000) Create stream
I0817 22:01:43.593486       7 log.go:172] (0x4001fb8790) (0x4002f0c000) Stream added, broadcasting: 3
I0817 22:01:43.595386       7 log.go:172] (0x4001fb8790) Reply frame received for 3
I0817 22:01:43.595555       7 log.go:172] (0x4001fb8790) (0x400205c8c0) Create stream
I0817 22:01:43.595649       7 log.go:172] (0x4001fb8790) (0x400205c8c0) Stream added, broadcasting: 5
I0817 22:01:43.597418       7 log.go:172] (0x4001fb8790) Reply frame received for 5
I0817 22:01:44.662461       7 log.go:172] (0x4001fb8790) Data frame received for 5
I0817 22:01:44.662767       7 log.go:172] (0x400205c8c0) (5) Data frame handling
I0817 22:01:44.663035       7 log.go:172] (0x4001fb8790) Data frame received for 3
I0817 22:01:44.663201       7 log.go:172] (0x4002f0c000) (3) Data frame handling
I0817 22:01:44.663313       7 log.go:172] (0x4002f0c000) (3) Data frame sent
I0817 22:01:44.663392       7 log.go:172] (0x4001fb8790) Data frame received for 3
I0817 22:01:44.663501       7 log.go:172] (0x4002f0c000) (3) Data frame handling
I0817 22:01:44.664617       7 log.go:172] (0x4001fb8790) Data frame received for 1
I0817 22:01:44.664825       7 log.go:172] (0x400205c820) (1) Data frame handling
I0817 22:01:44.665003       7 log.go:172] (0x400205c820) (1) Data frame sent
I0817 22:01:44.665304       7 log.go:172] (0x4001fb8790) (0x400205c820) Stream removed, broadcasting: 1
I0817 22:01:44.665530       7 log.go:172] (0x4001fb8790) Go away received
I0817 22:01:44.666012       7 log.go:172] (0x4001fb8790) (0x400205c820) Stream removed, broadcasting: 1
I0817 22:01:44.666180       7 log.go:172] (0x4001fb8790) (0x4002f0c000) Stream removed, broadcasting: 3
I0817 22:01:44.666387       7 log.go:172] (0x4001fb8790) (0x400205c8c0) Stream removed, broadcasting: 5
Aug 17 22:01:44.666: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:01:44.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2799" for this suite.

• [SLOW TEST:32.668 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":735,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:01:44.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:01:48.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5687" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":754,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:01:48.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 17 22:01:56.111: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7863 pod-service-account-021817fb-c467-4c6d-8d2b-342bbac3c5ff -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 17 22:02:00.686: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7863 pod-service-account-021817fb-c467-4c6d-8d2b-342bbac3c5ff -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 17 22:02:02.147: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7863 pod-service-account-021817fb-c467-4c6d-8d2b-342bbac3c5ff -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:02:03.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7863" for this suite.

• [SLOW TEST:14.692 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":47,"skipped":767,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:02:03.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 22:02:03.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-051ebe6b-dd16-4bb5-bea3-1730ffdf2806" in namespace "downward-api-3468" to be "success or failure"
Aug 17 22:02:03.734: INFO: Pod "downwardapi-volume-051ebe6b-dd16-4bb5-bea3-1730ffdf2806": Phase="Pending", Reason="", readiness=false. Elapsed: 21.148644ms
Aug 17 22:02:05.740: INFO: Pod "downwardapi-volume-051ebe6b-dd16-4bb5-bea3-1730ffdf2806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027777591s
Aug 17 22:02:07.759: INFO: Pod "downwardapi-volume-051ebe6b-dd16-4bb5-bea3-1730ffdf2806": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046578689s
STEP: Saw pod success
Aug 17 22:02:07.759: INFO: Pod "downwardapi-volume-051ebe6b-dd16-4bb5-bea3-1730ffdf2806" satisfied condition "success or failure"
Aug 17 22:02:07.762: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-051ebe6b-dd16-4bb5-bea3-1730ffdf2806 container client-container: 
STEP: delete the pod
Aug 17 22:02:07.791: INFO: Waiting for pod downwardapi-volume-051ebe6b-dd16-4bb5-bea3-1730ffdf2806 to disappear
Aug 17 22:02:07.805: INFO: Pod downwardapi-volume-051ebe6b-dd16-4bb5-bea3-1730ffdf2806 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:02:07.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3468" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":786,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:02:07.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:02:07.959: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:02:13.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-141" for this suite.

• [SLOW TEST:5.259 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":49,"skipped":816,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:02:13.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 17 22:02:13.245: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 17 22:02:13.371: INFO: Waiting for terminating namespaces to be deleted...
Aug 17 22:02:13.381: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 17 22:02:13.394: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 17 22:02:13.394: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 17 22:02:13.394: INFO: busybox-readonly-fs52659ad9-8bc2-413c-b047-0959b92a9db4 from kubelet-test-5687 started at 2020-08-17 22:01:44 +0000 UTC (1 container status recorded)
Aug 17 22:02:13.395: INFO: 	Container busybox-readonly-fs52659ad9-8bc2-413c-b047-0959b92a9db4 ready: true, restart count 0
Aug 17 22:02:13.395: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 17 22:02:13.395: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 17 22:02:13.395: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 17 22:02:13.414: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 17 22:02:13.414: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 17 22:02:13.414: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 17 22:02:13.414: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-97258f41-bb28-40e2-a18d-afd3da68b217 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-97258f41-bb28-40e2-a18d-afd3da68b217 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-97258f41-bb28-40e2-a18d-afd3da68b217
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:02:22.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2159" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:9.032 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":50,"skipped":850,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:02:22.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2049 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2049;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2049 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2049;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2049.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2049.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2049.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2049.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2049.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2049.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2049.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2049.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2049.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2049.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2049.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 219.105.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.105.219_udp@PTR;check="$$(dig +tcp +noall +answer +search 219.105.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.105.219_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2049 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2049;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2049 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2049;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2049.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2049.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2049.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2049.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2049.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2049.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2049.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2049.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2049.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2049.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2049.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2049.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 219.105.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.105.219_udp@PTR;check="$$(dig +tcp +noall +answer +search 219.105.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.105.219_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 17 22:02:30.371: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.375: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.382: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.385: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.388: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.391: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.394: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.745: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.749: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.752: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.755: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.759: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.762: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.764: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.766: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:30.781: INFO: Lookups using dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Aug 17 22:02:35.788: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.793: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.797: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.801: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.805: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.808: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.812: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.816: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.844: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.848: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.852: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.856: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.860: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.864: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.868: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.872: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:35.893: INFO: Lookups using dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Aug 17 22:02:40.788: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.796: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.801: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.805: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.809: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.813: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.816: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.819: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.847: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.850: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.853: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.856: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.859: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.863: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.866: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.871: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:40.892: INFO: Lookups using dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Aug 17 22:02:45.817: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.822: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.826: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.830: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.834: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.838: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.841: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.844: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.871: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.874: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.878: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.882: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.886: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.891: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.895: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.898: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:45.916: INFO: Lookups using dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Aug 17 22:02:50.789: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.794: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.798: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.802: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.806: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.811: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.814: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.819: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.845: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.848: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.852: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.855: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.859: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.863: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.867: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.872: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:50.899: INFO: Lookups using dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Aug 17 22:02:55.789: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.794: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.799: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.803: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.807: INFO: Unable to read wheezy_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.811: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.815: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.819: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.856: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.860: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.865: INFO: Unable to read jessie_udp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.870: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049 from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.874: INFO: Unable to read jessie_udp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.879: INFO: Unable to read jessie_tcp@dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.883: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.887: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc from pod dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788: the server could not find the requested resource (get pods dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788)
Aug 17 22:02:55.920: INFO: Lookups using dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2049 wheezy_tcp@dns-test-service.dns-2049 wheezy_udp@dns-test-service.dns-2049.svc wheezy_tcp@dns-test-service.dns-2049.svc wheezy_udp@_http._tcp.dns-test-service.dns-2049.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2049.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2049 jessie_tcp@dns-test-service.dns-2049 jessie_udp@dns-test-service.dns-2049.svc jessie_tcp@dns-test-service.dns-2049.svc jessie_udp@_http._tcp.dns-test-service.dns-2049.svc jessie_tcp@_http._tcp.dns-test-service.dns-2049.svc]

Aug 17 22:03:00.901: INFO: DNS probes using dns-2049/dns-test-01c6c6c4-10c0-4b4e-b690-6764c8a57788 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:03:01.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2049" for this suite.

• [SLOW TEST:39.167 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":51,"skipped":855,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:03:01.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7187.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7187.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
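
The awk pipeline in both commands derives the pod's own A record from its IP: hostname -i prints something like 10.244.1.7, which becomes 10-244-1-7.dns-7187.pod.cluster.local. The same record can be checked by hand (the IP below is illustrative; substitute the probe pod's real address):

dig +notcp +noall +answer 10-244-1-7.dns-7187.pod.cluster.local A
dig +tcp   +noall +answer 10-244-1-7.dns-7187.pod.cluster.local A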

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 17 22:03:09.655: INFO: DNS probes using dns-7187/dns-test-b23ecd7f-6e2c-486d-86df-28d55f340354 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:03:09.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7187" for this suite.

• [SLOW TEST:8.428 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":52,"skipped":902,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:03:09.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:03:10.978: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 17 22:03:13.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298590, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298590, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298591, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298590, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 22:03:16.474: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
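
Readiness here is two separate gates: the webhook deployment must report available replicas, and the e2e-test-crd-conversion-webhook service must acquire an endpoint. The same checks by hand, with the namespace and names taken from this spec's log:

kubectl -n crd-webhook-2829 rollout status deployment/sample-crd-conversion-webhook-deployment
kubectl -n crd-webhook-2829 get endpoints e2e-test-crd-conversion-webhook
# one ready address under ENDPOINTS means the service has paired
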
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:03:16.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
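
Listing at both versions exercises the conversion path twice: objects stored at one version are converted by the webhook before the API server returns them at the other. With a fully qualified resource name, kubectl can request a specific version (the widgets plural and group below are hypothetical, not the test's CRD):

kubectl get widgets.v1.stable.example.com   # hypothetical plural.version.group
kubectl get widgets.v2.stable.example.com   # same objects, converted on read
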
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:03:19.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2829" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:10.365 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":53,"skipped":915,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:03:20.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:03:20.283: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 17 22:03:21.483: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
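
The quota interplay can be reproduced directly: create a two-pod quota, ask an rc for more, watch it surface a ReplicaFailure condition, then scale within quota and watch the condition clear. A sketch, assuming the names from the log and a hypothetical manifest asking for more replicas than the quota allows:

kubectl -n replication-controller-8722 create quota condition-test --hard=pods=2
kubectl -n replication-controller-8722 create -f condition-test-rc.yaml   # hypothetical rc manifest, e.g. 3 replicas
kubectl -n replication-controller-8722 get rc condition-test -o jsonpath='{.status.conditions}'
# expect a condition with type=ReplicaFailure, reason=FailedCreate
kubectl -n replication-controller-8722 scale rc condition-test --replicas=2
# the ReplicaFailure condition is removed once pod creation succeeds
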
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:03:21.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8722" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":54,"skipped":925,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:03:21.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Aug 17 22:03:21.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 17 22:03:23.195: INFO: stderr: ""
Aug 17 22:03:23.195: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
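
The assertion only greps the colored stdout above for the master and KubeDNS entries. The same output with the ANSI color codes stripped (GNU sed escape syntax assumed):

kubectl cluster-info | sed 's/\x1b\[[0-9;]*m//g'
# Kubernetes master is running at https://172.30.12.66:37695
# KubeDNS is running at https://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
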
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:03:23.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3034" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":55,"skipped":935,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:03:23.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 17 22:03:30.594: INFO: Successfully updated pod "annotationupdatec7951452-b1a8-4a60-b7d1-072ef09a84da"
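
Behind this, the kubelet refreshes projected downwardAPI files when pod metadata changes, so the test only has to patch an annotation and poll the file. By hand (pod name from the log; the annotation key/value and mount path are illustrative assumptions, not the test's exact values):

kubectl -n projected-2581 annotate pod annotationupdatec7951452-b1a8-4a60-b7d1-072ef09a84da --overwrite builder=bar
kubectl -n projected-2581 exec annotationupdatec7951452-b1a8-4a60-b7d1-072ef09a84da -- cat /etc/podinfo/annotations
# the file converges to the new annotation within the kubelet sync period
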
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:03:32.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2581" for this suite.

• [SLOW TEST:9.181 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":952,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:03:32.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 22:03:32.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9c5bc7f-b040-41c1-b51e-39b80b0c6cce" in namespace "projected-4429" to be "success or failure"
Aug 17 22:03:32.966: INFO: Pod "downwardapi-volume-e9c5bc7f-b040-41c1-b51e-39b80b0c6cce": Phase="Pending", Reason="", readiness=false. Elapsed: 35.116653ms
Aug 17 22:03:35.058: INFO: Pod "downwardapi-volume-e9c5bc7f-b040-41c1-b51e-39b80b0c6cce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127458666s
Aug 17 22:03:37.066: INFO: Pod "downwardapi-volume-e9c5bc7f-b040-41c1-b51e-39b80b0c6cce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135357282s
Aug 17 22:03:39.077: INFO: Pod "downwardapi-volume-e9c5bc7f-b040-41c1-b51e-39b80b0c6cce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145681314s
Aug 17 22:03:41.317: INFO: Pod "downwardapi-volume-e9c5bc7f-b040-41c1-b51e-39b80b0c6cce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.386045278s
STEP: Saw pod success
Aug 17 22:03:41.317: INFO: Pod "downwardapi-volume-e9c5bc7f-b040-41c1-b51e-39b80b0c6cce" satisfied condition "success or failure"
Aug 17 22:03:41.533: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e9c5bc7f-b040-41c1-b51e-39b80b0c6cce container client-container: 
STEP: delete the pod
Aug 17 22:03:41.681: INFO: Waiting for pod downwardapi-volume-e9c5bc7f-b040-41c1-b51e-39b80b0c6cce to disappear
Aug 17 22:03:41.700: INFO: Pod downwardapi-volume-e9c5bc7f-b040-41c1-b51e-39b80b0c6cce no longer exists
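
The pod under test mounts a projected downwardAPI volume exposing its own memory request and prints the file, so Succeeded plus the expected log line is the whole assertion. A minimal pod of the same shape (a sketch; the name and image are illustrative, not the test's exact spec):

kubectl apply -n projected-4429 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
# kubectl logs downwardapi-memory-demo -n projected-4429 prints 33554432 (32Mi in bytes)
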
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:03:41.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4429" for this suite.

• [SLOW TEST:9.077 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":955,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:03:41.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
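
The repeated polls above are the test waiting out cascading deletion: the Deployment object disappears immediately, and the garbage collector then removes the owned ReplicaSet and its two pods. The equivalent by hand (deployment name hypothetical; --cascade=background is current kubectl syntax, older clients spelled it --cascade=true):

kubectl -n gc-3656 delete deployment simpletest-deployment --cascade=background   # hypothetical name
kubectl -n gc-3656 get rs,pods   # drains to empty once the GC catches up
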
STEP: Gathering metrics
W0817 22:03:49.976604       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 17 22:03:49.976: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:03:49.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3656" for this suite.

• [SLOW TEST:8.276 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":58,"skipped":978,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:03:49.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8536
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 17 22:03:50.409: INFO: Found 0 stateful pods, waiting for 3
Aug 17 22:04:00.418: INFO: Found 2 stateful pods, waiting for 3
Aug 17 22:04:10.417: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 22:04:10.417: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 22:04:10.417: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 22:04:10.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8536 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 17 22:04:11.921: INFO: stderr: "I0817 22:04:11.789205    1369 log.go:172] (0x4000115600) (0x4000a0c000) Create stream\nI0817 22:04:11.791483    1369 log.go:172] (0x4000115600) (0x4000a0c000) Stream added, broadcasting: 1\nI0817 22:04:11.803412    1369 log.go:172] (0x4000115600) Reply frame received for 1\nI0817 22:04:11.804484    1369 log.go:172] (0x4000115600) (0x4000a0c0a0) Create stream\nI0817 22:04:11.804564    1369 log.go:172] (0x4000115600) (0x4000a0c0a0) Stream added, broadcasting: 3\nI0817 22:04:11.807568    1369 log.go:172] (0x4000115600) Reply frame received for 3\nI0817 22:04:11.807914    1369 log.go:172] (0x4000115600) (0x4000b4e000) Create stream\nI0817 22:04:11.808001    1369 log.go:172] (0x4000115600) (0x4000b4e000) Stream added, broadcasting: 5\nI0817 22:04:11.809202    1369 log.go:172] (0x4000115600) Reply frame received for 5\nI0817 22:04:11.874165    1369 log.go:172] (0x4000115600) Data frame received for 5\nI0817 22:04:11.874448    1369 log.go:172] (0x4000b4e000) (5) Data frame handling\nI0817 22:04:11.875073    1369 log.go:172] (0x4000b4e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 22:04:11.904147    1369 log.go:172] (0x4000115600) Data frame received for 3\nI0817 22:04:11.904445    1369 log.go:172] (0x4000a0c0a0) (3) Data frame handling\nI0817 22:04:11.904603    1369 log.go:172] (0x4000a0c0a0) (3) Data frame sent\nI0817 22:04:11.904916    1369 log.go:172] (0x4000115600) Data frame received for 3\nI0817 22:04:11.905050    1369 log.go:172] (0x4000a0c0a0) (3) Data frame handling\nI0817 22:04:11.905163    1369 log.go:172] (0x4000115600) Data frame received for 5\nI0817 22:04:11.905282    1369 log.go:172] (0x4000b4e000) (5) Data frame handling\nI0817 22:04:11.906083    1369 log.go:172] (0x4000115600) Data frame received for 1\nI0817 22:04:11.906150    1369 log.go:172] (0x4000a0c000) (1) Data frame handling\nI0817 22:04:11.906205    1369 log.go:172] (0x4000a0c000) (1) Data frame sent\nI0817 22:04:11.906949    1369 log.go:172] (0x4000115600) (0x4000a0c000) Stream removed, broadcasting: 1\nI0817 22:04:11.908550    1369 log.go:172] (0x4000115600) Go away received\nI0817 22:04:11.911922    1369 log.go:172] (0x4000115600) (0x4000a0c000) Stream removed, broadcasting: 1\nI0817 22:04:11.912161    1369 log.go:172] (0x4000115600) (0x4000a0c0a0) Stream removed, broadcasting: 3\nI0817 22:04:11.912342    1369 log.go:172] (0x4000115600) (0x4000b4e000) Stream removed, broadcasting: 5\n"
Aug 17 22:04:11.922: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 17 22:04:11.922: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 17 22:04:21.969: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 17 22:04:32.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8536 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 22:04:33.522: INFO: stderr: "I0817 22:04:33.411790    1393 log.go:172] (0x4000a7c000) (0x4000a9e000) Create stream\nI0817 22:04:33.415047    1393 log.go:172] (0x4000a7c000) (0x4000a9e000) Stream added, broadcasting: 1\nI0817 22:04:33.425696    1393 log.go:172] (0x4000a7c000) Reply frame received for 1\nI0817 22:04:33.426490    1393 log.go:172] (0x4000a7c000) (0x40006848c0) Create stream\nI0817 22:04:33.426572    1393 log.go:172] (0x4000a7c000) (0x40006848c0) Stream added, broadcasting: 3\nI0817 22:04:33.428099    1393 log.go:172] (0x4000a7c000) Reply frame received for 3\nI0817 22:04:33.428337    1393 log.go:172] (0x4000a7c000) (0x40004a3680) Create stream\nI0817 22:04:33.428395    1393 log.go:172] (0x4000a7c000) (0x40004a3680) Stream added, broadcasting: 5\nI0817 22:04:33.429842    1393 log.go:172] (0x4000a7c000) Reply frame received for 5\nI0817 22:04:33.501071    1393 log.go:172] (0x4000a7c000) Data frame received for 5\nI0817 22:04:33.502101    1393 log.go:172] (0x4000a7c000) Data frame received for 3\nI0817 22:04:33.502280    1393 log.go:172] (0x40006848c0) (3) Data frame handling\nI0817 22:04:33.502415    1393 log.go:172] (0x4000a7c000) Data frame received for 1\nI0817 22:04:33.502582    1393 log.go:172] (0x4000a9e000) (1) Data frame handling\nI0817 22:04:33.502695    1393 log.go:172] (0x40004a3680) (5) Data frame handling\nI0817 22:04:33.503422    1393 log.go:172] (0x4000a9e000) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 22:04:33.504039    1393 log.go:172] (0x40006848c0) (3) Data frame sent\nI0817 22:04:33.504332    1393 log.go:172] (0x4000a7c000) Data frame received for 3\nI0817 22:04:33.504418    1393 log.go:172] (0x40006848c0) (3) Data frame handling\nI0817 22:04:33.504539    1393 log.go:172] (0x40004a3680) (5) Data frame sent\nI0817 22:04:33.504642    1393 log.go:172] (0x4000a7c000) Data frame received for 5\nI0817 22:04:33.504795    1393 log.go:172] (0x40004a3680) (5) Data frame handling\nI0817 22:04:33.505512    1393 log.go:172] (0x4000a7c000) (0x4000a9e000) Stream removed, broadcasting: 1\nI0817 22:04:33.508944    1393 log.go:172] (0x4000a7c000) Go away received\nI0817 22:04:33.511471    1393 log.go:172] (0x4000a7c000) (0x4000a9e000) Stream removed, broadcasting: 1\nI0817 22:04:33.511758    1393 log.go:172] (0x4000a7c000) (0x40006848c0) Stream removed, broadcasting: 3\nI0817 22:04:33.511959    1393 log.go:172] (0x4000a7c000) (0x40004a3680) Stream removed, broadcasting: 5\n"
Aug 17 22:04:33.523: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 17 22:04:33.523: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 17 22:04:43.682: INFO: Waiting for StatefulSet statefulset-8536/ss2 to complete update
Aug 17 22:04:43.683: INFO: Waiting for Pod statefulset-8536/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 17 22:04:43.683: INFO: Waiting for Pod statefulset-8536/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 17 22:04:53.765: INFO: Waiting for StatefulSet statefulset-8536/ss2 to complete update
Aug 17 22:04:53.765: INFO: Waiting for Pod statefulset-8536/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 17 22:05:03.773: INFO: Waiting for StatefulSet statefulset-8536/ss2 to complete update
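
What the log is driving at: updating the pod template creates update revision ss2-65c7964b94, and the controller replaces pods in reverse ordinal order until every pod carries it. The same template update and rollback can be sketched with plain kubectl (the container name webserver is an assumption inferred from the httpd image, not shown in the log):

kubectl -n statefulset-8536 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
kubectl -n statefulset-8536 rollout status statefulset/ss2
kubectl -n statefulset-8536 rollout undo statefulset/ss2   # back to the previous revision
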
STEP: Rolling back to a previous revision
Aug 17 22:05:13.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8536 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 17 22:05:15.371: INFO: stderr: "I0817 22:05:15.119885    1417 log.go:172] (0x4000712000) (0x40005bf4a0) Create stream\nI0817 22:05:15.122880    1417 log.go:172] (0x4000712000) (0x40005bf4a0) Stream added, broadcasting: 1\nI0817 22:05:15.134221    1417 log.go:172] (0x4000712000) Reply frame received for 1\nI0817 22:05:15.135015    1417 log.go:172] (0x4000712000) (0x40006c80a0) Create stream\nI0817 22:05:15.135094    1417 log.go:172] (0x4000712000) (0x40006c80a0) Stream added, broadcasting: 3\nI0817 22:05:15.137050    1417 log.go:172] (0x4000712000) Reply frame received for 3\nI0817 22:05:15.137568    1417 log.go:172] (0x4000712000) (0x4000764000) Create stream\nI0817 22:05:15.137680    1417 log.go:172] (0x4000712000) (0x4000764000) Stream added, broadcasting: 5\nI0817 22:05:15.139296    1417 log.go:172] (0x4000712000) Reply frame received for 5\nI0817 22:05:15.219921    1417 log.go:172] (0x4000712000) Data frame received for 5\nI0817 22:05:15.220311    1417 log.go:172] (0x4000764000) (5) Data frame handling\nI0817 22:05:15.221200    1417 log.go:172] (0x4000764000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 22:05:15.346368    1417 log.go:172] (0x4000712000) Data frame received for 3\nI0817 22:05:15.346564    1417 log.go:172] (0x40006c80a0) (3) Data frame handling\nI0817 22:05:15.346687    1417 log.go:172] (0x40006c80a0) (3) Data frame sent\nI0817 22:05:15.346807    1417 log.go:172] (0x4000712000) Data frame received for 3\nI0817 22:05:15.347069    1417 log.go:172] (0x40006c80a0) (3) Data frame handling\nI0817 22:05:15.347488    1417 log.go:172] (0x4000712000) Data frame received for 5\nI0817 22:05:15.347696    1417 log.go:172] (0x4000764000) (5) Data frame handling\nI0817 22:05:15.348304    1417 log.go:172] (0x4000712000) Data frame received for 1\nI0817 22:05:15.348478    1417 log.go:172] (0x40005bf4a0) (1) Data frame handling\nI0817 22:05:15.348590    1417 log.go:172] (0x40005bf4a0) (1) Data frame sent\nI0817 22:05:15.350620    1417 log.go:172] (0x4000712000) (0x40005bf4a0) Stream removed, broadcasting: 1\nI0817 22:05:15.355492    1417 log.go:172] (0x4000712000) Go away received\nI0817 22:05:15.357818    1417 log.go:172] (0x4000712000) (0x40005bf4a0) Stream removed, broadcasting: 1\nI0817 22:05:15.358466    1417 log.go:172] (0x4000712000) (0x40006c80a0) Stream removed, broadcasting: 3\nI0817 22:05:15.358736    1417 log.go:172] (0x4000712000) (0x4000764000) Stream removed, broadcasting: 5\n"
Aug 17 22:05:15.372: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 17 22:05:15.372: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 17 22:05:25.415: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 17 22:05:35.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8536 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 22:05:37.030: INFO: stderr: "I0817 22:05:36.936012    1440 log.go:172] (0x400011f6b0) (0x40007e7e00) Create stream\nI0817 22:05:36.940505    1440 log.go:172] (0x400011f6b0) (0x40007e7e00) Stream added, broadcasting: 1\nI0817 22:05:36.952316    1440 log.go:172] (0x400011f6b0) Reply frame received for 1\nI0817 22:05:36.952975    1440 log.go:172] (0x400011f6b0) (0x40007e7ea0) Create stream\nI0817 22:05:36.953035    1440 log.go:172] (0x400011f6b0) (0x40007e7ea0) Stream added, broadcasting: 3\nI0817 22:05:36.955088    1440 log.go:172] (0x400011f6b0) Reply frame received for 3\nI0817 22:05:36.955415    1440 log.go:172] (0x400011f6b0) (0x4000712000) Create stream\nI0817 22:05:36.955492    1440 log.go:172] (0x400011f6b0) (0x4000712000) Stream added, broadcasting: 5\nI0817 22:05:36.956577    1440 log.go:172] (0x400011f6b0) Reply frame received for 5\nI0817 22:05:37.006634    1440 log.go:172] (0x400011f6b0) Data frame received for 5\nI0817 22:05:37.007123    1440 log.go:172] (0x400011f6b0) Data frame received for 3\nI0817 22:05:37.007246    1440 log.go:172] (0x40007e7ea0) (3) Data frame handling\nI0817 22:05:37.007336    1440 log.go:172] (0x400011f6b0) Data frame received for 1\nI0817 22:05:37.007444    1440 log.go:172] (0x40007e7e00) (1) Data frame handling\nI0817 22:05:37.007543    1440 log.go:172] (0x4000712000) (5) Data frame handling\nI0817 22:05:37.008106    1440 log.go:172] (0x40007e7ea0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 22:05:37.009143    1440 log.go:172] (0x400011f6b0) Data frame received for 3\nI0817 22:05:37.009233    1440 log.go:172] (0x4000712000) (5) Data frame sent\nI0817 22:05:37.009372    1440 log.go:172] (0x400011f6b0) Data frame received for 5\nI0817 22:05:37.009624    1440 log.go:172] (0x4000712000) (5) Data frame handling\nI0817 22:05:37.010088    1440 log.go:172] (0x40007e7ea0) (3) Data frame handling\nI0817 22:05:37.010433    1440 log.go:172] (0x40007e7e00) (1) Data frame sent\nI0817 22:05:37.011975    1440 log.go:172] (0x400011f6b0) (0x40007e7e00) Stream removed, broadcasting: 1\nI0817 22:05:37.013581    1440 log.go:172] (0x400011f6b0) Go away received\nI0817 22:05:37.017632    1440 log.go:172] (0x400011f6b0) (0x40007e7e00) Stream removed, broadcasting: 1\nI0817 22:05:37.018064    1440 log.go:172] (0x400011f6b0) (0x40007e7ea0) Stream removed, broadcasting: 3\nI0817 22:05:37.018358    1440 log.go:172] (0x400011f6b0) (0x4000712000) Stream removed, broadcasting: 5\n"
Aug 17 22:05:37.031: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 17 22:05:37.031: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 17 22:05:47.129: INFO: Waiting for StatefulSet statefulset-8536/ss2 to complete update
Aug 17 22:05:47.129: INFO: Waiting for Pod statefulset-8536/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 17 22:05:47.129: INFO: Waiting for Pod statefulset-8536/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 17 22:05:47.129: INFO: Waiting for Pod statefulset-8536/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 17 22:05:57.139: INFO: Waiting for StatefulSet statefulset-8536/ss2 to complete update
Aug 17 22:05:57.140: INFO: Waiting for Pod statefulset-8536/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 17 22:05:57.140: INFO: Waiting for Pod statefulset-8536/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 17 22:06:07.388: INFO: Waiting for StatefulSet statefulset-8536/ss2 to complete update
Aug 17 22:06:07.388: INFO: Waiting for Pod statefulset-8536/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 17 22:06:17.143: INFO: Deleting all statefulset in ns statefulset-8536
Aug 17 22:06:17.148: INFO: Scaling statefulset ss2 to 0
Aug 17 22:06:47.170: INFO: Waiting for statefulset status.replicas updated to 0
Aug 17 22:06:47.174: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:06:47.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8536" for this suite.

• [SLOW TEST:177.306 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":59,"skipped":985,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:06:47.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 17 22:06:47.501: INFO: Waiting up to 5m0s for pod "pod-c3207418-4694-4814-b5f4-d7577fae8e64" in namespace "emptydir-103" to be "success or failure"
Aug 17 22:06:47.624: INFO: Pod "pod-c3207418-4694-4814-b5f4-d7577fae8e64": Phase="Pending", Reason="", readiness=false. Elapsed: 122.113028ms
Aug 17 22:06:49.631: INFO: Pod "pod-c3207418-4694-4814-b5f4-d7577fae8e64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129376345s
Aug 17 22:06:51.760: INFO: Pod "pod-c3207418-4694-4814-b5f4-d7577fae8e64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258695964s
Aug 17 22:06:53.796: INFO: Pod "pod-c3207418-4694-4814-b5f4-d7577fae8e64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294895602s
Aug 17 22:06:55.822: INFO: Pod "pod-c3207418-4694-4814-b5f4-d7577fae8e64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.320400727s
STEP: Saw pod success
Aug 17 22:06:55.822: INFO: Pod "pod-c3207418-4694-4814-b5f4-d7577fae8e64" satisfied condition "success or failure"
Aug 17 22:06:55.827: INFO: Trying to get logs from node jerma-worker pod pod-c3207418-4694-4814-b5f4-d7577fae8e64 container test-container: 
STEP: delete the pod
Aug 17 22:06:56.063: INFO: Waiting for pod pod-c3207418-4694-4814-b5f4-d7577fae8e64 to disappear
Aug 17 22:06:56.131: INFO: Pod pod-c3207418-4694-4814-b5f4-d7577fae8e64 no longer exists
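
The pod simply stats its mount: an emptyDir with medium Memory should appear as a tmpfs mount with 0777 permissions. A self-contained reproduction (sketch; pod name and image are illustrative):

kubectl apply -n emptydir-103 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && grep ' /test-volume ' /proc/mounts"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory
EOF
# logs show drwxrwxrwx on the directory and a tmpfs entry for /test-volume
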
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:06:56.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-103" for this suite.

• [SLOW TEST:8.844 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1008,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:06:56.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-bda5bab5-9859-4d94-91d3-edda5e0b9187
STEP: Creating configMap with name cm-test-opt-upd-f3409def-d175-4dca-adc3-42f52f07bf90
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-bda5bab5-9859-4d94-91d3-edda5e0b9187
STEP: Updating configmap cm-test-opt-upd-f3409def-d175-4dca-adc3-42f52f07bf90
STEP: Creating configMap with name cm-test-opt-create-f055ed64-c59e-41b7-a887-8ac18d2603e0
STEP: waiting to observe update in volume
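
Optional is the property being exercised: with optional: true, a projected configMap source may be deleted or not yet exist without failing the pod, and the kubelet adds or removes the projected files as the sources come and go. The field itself:

kubectl explain pod.spec.volumes.projected.sources.configMap.optional
# optional <boolean>: whether the ConfigMap or its keys must be defined
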
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:07:07.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9563" for this suite.

• [SLOW TEST:10.907 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1021,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:07:07.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
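
The 30-second wait is a negative check: with orphaning delete options the rc goes away, its pods must survive, and the GC strips their ownerReferences instead of deleting them. By hand (rc name hypothetical; --cascade=orphan is current kubectl syntax, formerly --cascade=false):

kubectl -n gc-5873 delete rc simpletest-rc --cascade=orphan   # hypothetical name
kubectl -n gc-5873 get pods -o jsonpath='{.items[*].metadata.ownerReferences}'
# the pods remain Running; the owner references print empty once orphaned
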
STEP: Gathering metrics
W0817 22:07:48.256339       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 17 22:07:48.256: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:07:48.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5873" for this suite.

• [SLOW TEST:41.212 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":62,"skipped":1027,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:07:48.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:07:51.896: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 22:07:54.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298871, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298871, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298872, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733298871, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 22:07:58.662: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:07:59.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3558-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
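
The storage-version flip in the middle is the interesting step: after the CRD is patched so v2 is the storage version, objects stored at v1 must still pass through the mutating webhook when patched. Flipping storage looks like this JSON patch (CRD name from the log; the version indices are assumptions):

kubectl patch crd e2e-test-webhook-3558-crds.webhook.example.com --type=json -p='[
  {"op": "replace", "path": "/spec/versions/0/storage", "value": false},
  {"op": "replace", "path": "/spec/versions/1/storage", "value": true}
]'
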
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:08:04.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9968" for this suite.
STEP: Destroying namespace "webhook-9968-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:21.098 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":63,"skipped":1027,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:08:09.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 22:08:10.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe" in namespace "downward-api-1378" to be "success or failure"
Aug 17 22:08:11.377: INFO: Pod "downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe": Phase="Pending", Reason="", readiness=false. Elapsed: 658.555549ms
Aug 17 22:08:13.595: INFO: Pod "downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.87678706s
Aug 17 22:08:16.059: INFO: Pod "downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe": Phase="Pending", Reason="", readiness=false. Elapsed: 5.340250201s
Aug 17 22:08:18.170: INFO: Pod "downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe": Phase="Pending", Reason="", readiness=false. Elapsed: 7.451335685s
Aug 17 22:08:20.177: INFO: Pod "downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe": Phase="Running", Reason="", readiness=true. Elapsed: 9.45821876s
Aug 17 22:08:22.457: INFO: Pod "downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.739042414s
STEP: Saw pod success
Aug 17 22:08:22.458: INFO: Pod "downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe" satisfied condition "success or failure"
Aug 17 22:08:22.462: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe container client-container: 
STEP: delete the pod
Aug 17 22:08:22.533: INFO: Waiting for pod downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe to disappear
Aug 17 22:08:22.750: INFO: Pod downwardapi-volume-51904873-92c1-4a2e-8fe2-b315f05454fe no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:08:22.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1378" for this suite.

• [SLOW TEST:13.398 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1037,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:08:22.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 17 22:08:23.366: INFO: >>> kubeConfig: /root/.kube/config
Aug 17 22:08:42.893: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:09:50.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6887" for this suite.

• [SLOW TEST:87.396 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":65,"skipped":1055,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:09:50.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 17 22:09:55.799: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:09:55.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2730" for this suite.

• [SLOW TEST:5.837 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1086,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:09:56.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 17 22:10:00.798: INFO: Successfully updated pod "labelsupdate9eed3f68-7df0-442c-965d-d01be4c28b58"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:10:04.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1205" for this suite.

• [SLOW TEST:8.888 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1096,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:10:04.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-731e7a8b-63c3-4176-8e2d-ffa819bad347
STEP: Creating a pod to test consume configMaps
Aug 17 22:10:04.991: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8ddd800-94ac-487a-894b-a1d4de85fd7a" in namespace "projected-5081" to be "success or failure"
Aug 17 22:10:05.002: INFO: Pod "pod-projected-configmaps-b8ddd800-94ac-487a-894b-a1d4de85fd7a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.950171ms
Aug 17 22:10:07.040: INFO: Pod "pod-projected-configmaps-b8ddd800-94ac-487a-894b-a1d4de85fd7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048489041s
Aug 17 22:10:09.064: INFO: Pod "pod-projected-configmaps-b8ddd800-94ac-487a-894b-a1d4de85fd7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072477415s
Aug 17 22:10:11.268: INFO: Pod "pod-projected-configmaps-b8ddd800-94ac-487a-894b-a1d4de85fd7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.276531842s
STEP: Saw pod success
Aug 17 22:10:11.268: INFO: Pod "pod-projected-configmaps-b8ddd800-94ac-487a-894b-a1d4de85fd7a" satisfied condition "success or failure"
Aug 17 22:10:11.537: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b8ddd800-94ac-487a-894b-a1d4de85fd7a container projected-configmap-volume-test: 
STEP: delete the pod
Aug 17 22:10:11.826: INFO: Waiting for pod pod-projected-configmaps-b8ddd800-94ac-487a-894b-a1d4de85fd7a to disappear
Aug 17 22:10:11.914: INFO: Pod pod-projected-configmaps-b8ddd800-94ac-487a-894b-a1d4de85fd7a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:10:11.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5081" for this suite.

• [SLOW TEST:7.123 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:10:12.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 17 22:10:12.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:12:07.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9057" for this suite.

• [SLOW TEST:115.403 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":69,"skipped":1158,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:12:07.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-c2dfe937-ef3f-4460-85c5-4eb7f9a1d223 in namespace container-probe-7556
Aug 17 22:12:16.425: INFO: Started pod busybox-c2dfe937-ef3f-4460-85c5-4eb7f9a1d223 in namespace container-probe-7556
STEP: checking the pod's current state and verifying that restartCount is present
Aug 17 22:12:16.486: INFO: Initial restart count of pod busybox-c2dfe937-ef3f-4460-85c5-4eb7f9a1d223 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:16:16.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7556" for this suite.

• [SLOW TEST:249.329 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1166,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:16:16.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-ae5c7511-8858-439f-9384-f9f3099eb557
STEP: Creating a pod to test consume configMaps
Aug 17 22:16:16.865: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e2a6124-4f2a-4897-a219-0a95dd119a98" in namespace "projected-5785" to be "success or failure"
Aug 17 22:16:16.883: INFO: Pod "pod-projected-configmaps-6e2a6124-4f2a-4897-a219-0a95dd119a98": Phase="Pending", Reason="", readiness=false. Elapsed: 17.776624ms
Aug 17 22:16:18.888: INFO: Pod "pod-projected-configmaps-6e2a6124-4f2a-4897-a219-0a95dd119a98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023555616s
Aug 17 22:16:20.991: INFO: Pod "pod-projected-configmaps-6e2a6124-4f2a-4897-a219-0a95dd119a98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126644382s
Aug 17 22:16:23.099: INFO: Pod "pod-projected-configmaps-6e2a6124-4f2a-4897-a219-0a95dd119a98": Phase="Running", Reason="", readiness=true. Elapsed: 6.234324082s
Aug 17 22:16:25.106: INFO: Pod "pod-projected-configmaps-6e2a6124-4f2a-4897-a219-0a95dd119a98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.24073276s
STEP: Saw pod success
Aug 17 22:16:25.106: INFO: Pod "pod-projected-configmaps-6e2a6124-4f2a-4897-a219-0a95dd119a98" satisfied condition "success or failure"
Aug 17 22:16:25.110: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-6e2a6124-4f2a-4897-a219-0a95dd119a98 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 17 22:16:25.156: INFO: Waiting for pod pod-projected-configmaps-6e2a6124-4f2a-4897-a219-0a95dd119a98 to disappear
Aug 17 22:16:25.179: INFO: Pod pod-projected-configmaps-6e2a6124-4f2a-4897-a219-0a95dd119a98 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:16:25.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5785" for this suite.

• [SLOW TEST:8.441 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1167,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:16:25.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-91b2dacf-77af-4540-a74f-e30053843714
STEP: Creating a pod to test consume configMaps
Aug 17 22:16:25.440: INFO: Waiting up to 5m0s for pod "pod-configmaps-792e012e-d728-48a1-be5a-800a8d1ccbe4" in namespace "configmap-8260" to be "success or failure"
Aug 17 22:16:25.472: INFO: Pod "pod-configmaps-792e012e-d728-48a1-be5a-800a8d1ccbe4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.035171ms
Aug 17 22:16:27.767: INFO: Pod "pod-configmaps-792e012e-d728-48a1-be5a-800a8d1ccbe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326658782s
Aug 17 22:16:29.774: INFO: Pod "pod-configmaps-792e012e-d728-48a1-be5a-800a8d1ccbe4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333366971s
Aug 17 22:16:31.781: INFO: Pod "pod-configmaps-792e012e-d728-48a1-be5a-800a8d1ccbe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.34027028s
STEP: Saw pod success
Aug 17 22:16:31.781: INFO: Pod "pod-configmaps-792e012e-d728-48a1-be5a-800a8d1ccbe4" satisfied condition "success or failure"
Aug 17 22:16:31.786: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-792e012e-d728-48a1-be5a-800a8d1ccbe4 container configmap-volume-test: 
STEP: delete the pod
Aug 17 22:16:32.149: INFO: Waiting for pod pod-configmaps-792e012e-d728-48a1-be5a-800a8d1ccbe4 to disappear
Aug 17 22:16:32.215: INFO: Pod pod-configmaps-792e012e-d728-48a1-be5a-800a8d1ccbe4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:16:32.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8260" for this suite.

• [SLOW TEST:7.031 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1199,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:16:32.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 17 22:16:37.420: INFO: Successfully updated pod "labelsupdate7cc18dc8-66e0-4e00-b929-cb55b5321e69"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:16:41.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-904" for this suite.

• [SLOW TEST:9.675 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1209,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:16:41.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 17 22:16:42.088: INFO: Waiting up to 5m0s for pod "pod-344b05e4-522f-42b6-b164-bd8c76ac4f1c" in namespace "emptydir-8403" to be "success or failure"
Aug 17 22:16:42.366: INFO: Pod "pod-344b05e4-522f-42b6-b164-bd8c76ac4f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 277.970538ms
Aug 17 22:16:44.373: INFO: Pod "pod-344b05e4-522f-42b6-b164-bd8c76ac4f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285364028s
Aug 17 22:16:46.512: INFO: Pod "pod-344b05e4-522f-42b6-b164-bd8c76ac4f1c": Phase="Running", Reason="", readiness=true. Elapsed: 4.423703425s
Aug 17 22:16:48.519: INFO: Pod "pod-344b05e4-522f-42b6-b164-bd8c76ac4f1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.430815413s
STEP: Saw pod success
Aug 17 22:16:48.519: INFO: Pod "pod-344b05e4-522f-42b6-b164-bd8c76ac4f1c" satisfied condition "success or failure"
Aug 17 22:16:48.523: INFO: Trying to get logs from node jerma-worker pod pod-344b05e4-522f-42b6-b164-bd8c76ac4f1c container test-container: 
STEP: delete the pod
Aug 17 22:16:48.985: INFO: Waiting for pod pod-344b05e4-522f-42b6-b164-bd8c76ac4f1c to disappear
Aug 17 22:16:49.037: INFO: Pod pod-344b05e4-522f-42b6-b164-bd8c76ac4f1c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:16:49.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8403" for this suite.

• [SLOW TEST:7.146 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1219,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:16:49.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 17 22:16:49.235: INFO: Waiting up to 5m0s for pod "downward-api-4bcb46a0-d914-40d5-a16f-e24127fd191e" in namespace "downward-api-6015" to be "success or failure"
Aug 17 22:16:49.278: INFO: Pod "downward-api-4bcb46a0-d914-40d5-a16f-e24127fd191e": Phase="Pending", Reason="", readiness=false. Elapsed: 42.783167ms
Aug 17 22:16:51.351: INFO: Pod "downward-api-4bcb46a0-d914-40d5-a16f-e24127fd191e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115128765s
Aug 17 22:16:53.358: INFO: Pod "downward-api-4bcb46a0-d914-40d5-a16f-e24127fd191e": Phase="Running", Reason="", readiness=true. Elapsed: 4.122517843s
Aug 17 22:16:55.366: INFO: Pod "downward-api-4bcb46a0-d914-40d5-a16f-e24127fd191e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130172571s
STEP: Saw pod success
Aug 17 22:16:55.366: INFO: Pod "downward-api-4bcb46a0-d914-40d5-a16f-e24127fd191e" satisfied condition "success or failure"
Aug 17 22:16:55.372: INFO: Trying to get logs from node jerma-worker2 pod downward-api-4bcb46a0-d914-40d5-a16f-e24127fd191e container dapi-container: 
STEP: delete the pod
Aug 17 22:16:55.420: INFO: Waiting for pod downward-api-4bcb46a0-d914-40d5-a16f-e24127fd191e to disappear
Aug 17 22:16:55.431: INFO: Pod downward-api-4bcb46a0-d914-40d5-a16f-e24127fd191e no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:16:55.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6015" for this suite.

• [SLOW TEST:6.464 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1235,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:16:55.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 17 22:16:55.647: INFO: >>> kubeConfig: /root/.kube/config
Aug 17 22:17:05.638: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:18:22.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7733" for this suite.

• [SLOW TEST:87.141 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":76,"skipped":1254,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:18:22.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276
STEP: creating the pod
Aug 17 22:18:24.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9435'
Aug 17 22:18:36.012: INFO: stderr: ""
Aug 17 22:18:36.012: INFO: stdout: "pod/pause created\n"
Aug 17 22:18:36.012: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 17 22:18:36.012: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9435" to be "running and ready"
Aug 17 22:18:36.119: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 107.067503ms
Aug 17 22:18:38.127: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114681094s
Aug 17 22:18:40.134: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121576239s
Aug 17 22:18:42.141: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.128439459s
Aug 17 22:18:42.141: INFO: Pod "pause" satisfied condition "running and ready"
Aug 17 22:18:42.141: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 17 22:18:42.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9435'
Aug 17 22:18:43.421: INFO: stderr: ""
Aug 17 22:18:43.421: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 17 22:18:43.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9435'
Aug 17 22:18:44.675: INFO: stderr: ""
Aug 17 22:18:44.675: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 17 22:18:44.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9435'
Aug 17 22:18:46.130: INFO: stderr: ""
Aug 17 22:18:46.131: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 17 22:18:46.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9435'
Aug 17 22:18:47.366: INFO: stderr: ""
Aug 17 22:18:47.366: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          12s   \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283
STEP: using delete to clean up resources
Aug 17 22:18:47.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9435'
Aug 17 22:18:48.663: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 17 22:18:48.664: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 17 22:18:48.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9435'
Aug 17 22:18:49.927: INFO: stderr: "No resources found in kubectl-9435 namespace.\n"
Aug 17 22:18:49.927: INFO: stdout: ""
Aug 17 22:18:49.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9435 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 17 22:18:51.215: INFO: stderr: ""
Aug 17 22:18:51.216: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:18:51.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9435" for this suite.

• [SLOW TEST:28.565 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":77,"skipped":1255,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:18:51.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 17 22:18:52.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7644'
Aug 17 22:18:53.495: INFO: stderr: ""
Aug 17 22:18:53.495: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 17 22:19:03.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7644 -o json'
Aug 17 22:19:04.748: INFO: stderr: ""
Aug 17 22:19:04.748: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-17T22:18:53Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-7644\",\n        \"resourceVersion\": \"883345\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-7644/pods/e2e-test-httpd-pod\",\n        \"uid\": \"c371c6d8-7aab-47b4-ad7d-9d831e00b22d\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-ctpj5\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-ctpj5\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-ctpj5\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-17T22:18:53Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-17T22:18:59Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-17T22:18:59Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-17T22:18:53Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://799a43cd65579242b237f7628e1b8bc346228d4ed195146e3939294e982ebad1\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-17T22:18:58Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.3\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.74\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.74\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-17T22:18:53Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 17 22:19:04.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7644'
Aug 17 22:19:06.363: INFO: stderr: ""
Aug 17 22:19:06.363: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Aug 17 22:19:06.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7644'
Aug 17 22:19:12.490: INFO: stderr: ""
Aug 17 22:19:12.490: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:19:12.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7644" for this suite.

• [SLOW TEST:21.285 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":78,"skipped":1261,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:19:12.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Aug 17 22:19:12.716: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:19:13.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8484" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":79,"skipped":1270,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:19:13.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Aug 17 22:19:14.213: INFO: Waiting up to 5m0s for pod "client-containers-e8159165-25f6-4efb-a989-fded7d3d94d1" in namespace "containers-5889" to be "success or failure"
Aug 17 22:19:14.218: INFO: Pod "client-containers-e8159165-25f6-4efb-a989-fded7d3d94d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468332ms
Aug 17 22:19:16.293: INFO: Pod "client-containers-e8159165-25f6-4efb-a989-fded7d3d94d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080050419s
Aug 17 22:19:18.299: INFO: Pod "client-containers-e8159165-25f6-4efb-a989-fded7d3d94d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086008875s
Aug 17 22:19:20.496: INFO: Pod "client-containers-e8159165-25f6-4efb-a989-fded7d3d94d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28277263s
Aug 17 22:19:22.503: INFO: Pod "client-containers-e8159165-25f6-4efb-a989-fded7d3d94d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.289918935s
STEP: Saw pod success
Aug 17 22:19:22.504: INFO: Pod "client-containers-e8159165-25f6-4efb-a989-fded7d3d94d1" satisfied condition "success or failure"
Aug 17 22:19:22.508: INFO: Trying to get logs from node jerma-worker pod client-containers-e8159165-25f6-4efb-a989-fded7d3d94d1 container test-container: 
STEP: delete the pod
Aug 17 22:19:22.544: INFO: Waiting for pod client-containers-e8159165-25f6-4efb-a989-fded7d3d94d1 to disappear
Aug 17 22:19:22.548: INFO: Pod client-containers-e8159165-25f6-4efb-a989-fded7d3d94d1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:19:22.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5889" for this suite.

• [SLOW TEST:8.667 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1311,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:19:22.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Aug 17 22:19:22.660: INFO: Waiting up to 5m0s for pod "var-expansion-175cbb77-df30-422d-903b-ab4ae435970c" in namespace "var-expansion-7985" to be "success or failure"
Aug 17 22:19:22.688: INFO: Pod "var-expansion-175cbb77-df30-422d-903b-ab4ae435970c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.907819ms
Aug 17 22:19:24.695: INFO: Pod "var-expansion-175cbb77-df30-422d-903b-ab4ae435970c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034699927s
Aug 17 22:19:26.701: INFO: Pod "var-expansion-175cbb77-df30-422d-903b-ab4ae435970c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04093422s
STEP: Saw pod success
Aug 17 22:19:26.701: INFO: Pod "var-expansion-175cbb77-df30-422d-903b-ab4ae435970c" satisfied condition "success or failure"
Aug 17 22:19:26.706: INFO: Trying to get logs from node jerma-worker pod var-expansion-175cbb77-df30-422d-903b-ab4ae435970c container dapi-container: 
STEP: delete the pod
Aug 17 22:19:27.127: INFO: Waiting for pod var-expansion-175cbb77-df30-422d-903b-ab4ae435970c to disappear
Aug 17 22:19:27.153: INFO: Pod var-expansion-175cbb77-df30-422d-903b-ab4ae435970c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:19:27.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7985" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1313,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:19:27.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:19:27.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9843'
Aug 17 22:19:28.854: INFO: stderr: ""
Aug 17 22:19:28.854: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 17 22:19:28.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9843'
Aug 17 22:19:30.952: INFO: stderr: ""
Aug 17 22:19:30.953: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 17 22:19:31.961: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 17 22:19:31.962: INFO: Found 0 / 1
Aug 17 22:19:33.143: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 17 22:19:33.143: INFO: Found 0 / 1
Aug 17 22:19:33.962: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 17 22:19:33.962: INFO: Found 1 / 1
Aug 17 22:19:33.962: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 17 22:19:33.969: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 17 22:19:33.969: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 17 22:19:33.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-zbwzh --namespace=kubectl-9843'
Aug 17 22:19:35.304: INFO: stderr: ""
Aug 17 22:19:35.304: INFO: stdout: "Name:         agnhost-master-zbwzh\nNamespace:    kubectl-9843\nPriority:     0\nNode:         jerma-worker/172.18.0.6\nStart Time:   Mon, 17 Aug 2020 22:19:29 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.2.25\nIPs:\n  IP:           10.244.2.25\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://4aad99e717b9f6c53f10ff65ae8f2e3027776468e12b78c4cd19abda3d586af8\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 17 Aug 2020 22:19:32 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-plvc7 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-plvc7:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-plvc7\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  6s    default-scheduler      Successfully assigned kubectl-9843/agnhost-master-zbwzh to jerma-worker\n  Normal  Pulled     5s    kubelet, jerma-worker  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    3s    kubelet, jerma-worker  Created container agnhost-master\n  Normal  Started    2s    kubelet, jerma-worker  Started container agnhost-master\n"
Aug 17 22:19:35.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9843'
Aug 17 22:19:36.714: INFO: stderr: ""
Aug 17 22:19:36.715: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-9843\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-zbwzh\n"
Aug 17 22:19:36.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9843'
Aug 17 22:19:38.029: INFO: stderr: ""
Aug 17 22:19:38.029: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-9843\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.100.247.88\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.25:6379\nSession Affinity:  None\nEvents:            \n"
Aug 17 22:19:38.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Aug 17 22:19:39.426: INFO: stderr: ""
Aug 17 22:19:39.426: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:37:06 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     \n  RenewTime:       Mon, 17 Aug 2020 22:19:39 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 17 Aug 2020 22:14:51 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 17 Aug 2020 22:14:51 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 17 Aug 2020 22:14:51 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 17 Aug 2020 22:14:51 +0000   Sat, 15 Aug 2020 09:37:40 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.10\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 e52c45bc589d48d995e8fd79ad5bf250\n  System UUID:                b981bdc7-d264-48ef-ab5e-3308e23aaf13\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-bvrm4                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2d12h\n  kube-system                 coredns-6955765f44-db8rh                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2d12h\n  kube-system                 etcd-jerma-control-plane          
             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d12h\n  kube-system                 kindnet-j88mt                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      2d12h\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         2d12h\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2d12h\n  kube-system                 kube-proxy-hmb6l                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d12h\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2d12h\n  local-path-storage          local-path-provisioner-58f6947c7-p2cqw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d12h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Aug 17 22:19:39.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9843'
Aug 17 22:19:40.724: INFO: stderr: ""
Aug 17 22:19:40.724: INFO: stdout: "Name:         kubectl-9843\nLabels:       e2e-framework=kubectl\n              e2e-run=390d212d-e9c9-47f2-91e3-5c34330eddb6\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:19:40.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9843" for this suite.

• [SLOW TEST:13.549 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":82,"skipped":1323,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:19:40.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Aug 17 22:19:40.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2704 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 17 22:19:45.777: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0817 22:19:45.622089    1944 log.go:172] (0x400001cc60) (0x4000827b80) Create stream\nI0817 22:19:45.627069    1944 log.go:172] (0x400001cc60) (0x4000827b80) Stream added, broadcasting: 1\nI0817 22:19:45.642146    1944 log.go:172] (0x400001cc60) Reply frame received for 1\nI0817 22:19:45.643432    1944 log.go:172] (0x400001cc60) (0x400055f360) Create stream\nI0817 22:19:45.643564    1944 log.go:172] (0x400001cc60) (0x400055f360) Stream added, broadcasting: 3\nI0817 22:19:45.645901    1944 log.go:172] (0x400001cc60) Reply frame received for 3\nI0817 22:19:45.646517    1944 log.go:172] (0x400001cc60) (0x40005a6000) Create stream\nI0817 22:19:45.646636    1944 log.go:172] (0x400001cc60) (0x40005a6000) Stream added, broadcasting: 5\nI0817 22:19:45.648488    1944 log.go:172] (0x400001cc60) Reply frame received for 5\nI0817 22:19:45.648786    1944 log.go:172] (0x400001cc60) (0x40005a60a0) Create stream\nI0817 22:19:45.648875    1944 log.go:172] (0x400001cc60) (0x40005a60a0) Stream added, broadcasting: 7\nI0817 22:19:45.650764    1944 log.go:172] (0x400001cc60) Reply frame received for 7\nI0817 22:19:45.655630    1944 log.go:172] (0x400055f360) (3) Writing data frame\nI0817 22:19:45.657989    1944 log.go:172] (0x400055f360) (3) Writing data frame\nI0817 22:19:45.659119    1944 log.go:172] (0x400001cc60) Data frame received for 5\nI0817 22:19:45.659301    1944 log.go:172] (0x40005a6000) (5) Data frame handling\nI0817 22:19:45.659576    1944 log.go:172] (0x40005a6000) (5) Data frame sent\nI0817 22:19:45.659982    1944 log.go:172] (0x400001cc60) Data frame received for 5\nI0817 22:19:45.660097    1944 log.go:172] (0x40005a6000) (5) Data frame handling\nI0817 22:19:45.660191    1944 log.go:172] (0x40005a6000) (5) Data frame sent\nI0817 22:19:45.695428    1944 log.go:172] (0x400001cc60) Data frame received for 7\nI0817 22:19:45.695664    1944 log.go:172] (0x40005a60a0) (7) Data frame handling\nI0817 22:19:45.695973    1944 log.go:172] (0x400001cc60) Data frame received for 5\nI0817 22:19:45.696237    1944 log.go:172] (0x40005a6000) (5) Data frame handling\nI0817 22:19:45.696462    1944 log.go:172] (0x400001cc60) Data frame received for 1\nI0817 22:19:45.696606    1944 log.go:172] (0x4000827b80) (1) Data frame handling\nI0817 22:19:45.696855    1944 log.go:172] (0x4000827b80) (1) Data frame sent\nI0817 22:19:45.698726    1944 log.go:172] (0x400001cc60) (0x4000827b80) Stream removed, broadcasting: 1\nI0817 22:19:45.700139    1944 log.go:172] (0x400001cc60) (0x400055f360) Stream removed, broadcasting: 3\nI0817 22:19:45.701318    1944 log.go:172] (0x400001cc60) Go away received\nI0817 22:19:45.705089    1944 log.go:172] (0x400001cc60) (0x4000827b80) Stream removed, broadcasting: 1\nI0817 22:19:45.705340    1944 log.go:172] (0x400001cc60) (0x400055f360) Stream removed, broadcasting: 3\nI0817 22:19:45.705412    1944 log.go:172] (0x400001cc60) (0x40005a6000) Stream removed, broadcasting: 5\nI0817 22:19:45.705576    1944 log.go:172] (0x400001cc60) (0x40005a60a0) Stream removed, broadcasting: 7\n"
Aug 17 22:19:45.778: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:19:47.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2704" for this suite.

• [SLOW TEST:7.061 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":83,"skipped":1329,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:19:47.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 22:19:47.954: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ba2c93b-0b56-4cc9-81de-b2929db943b1" in namespace "downward-api-3798" to be "success or failure"
Aug 17 22:19:47.992: INFO: Pod "downwardapi-volume-8ba2c93b-0b56-4cc9-81de-b2929db943b1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.315612ms
Aug 17 22:19:50.000: INFO: Pod "downwardapi-volume-8ba2c93b-0b56-4cc9-81de-b2929db943b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045913458s
Aug 17 22:19:52.007: INFO: Pod "downwardapi-volume-8ba2c93b-0b56-4cc9-81de-b2929db943b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052950996s
STEP: Saw pod success
Aug 17 22:19:52.007: INFO: Pod "downwardapi-volume-8ba2c93b-0b56-4cc9-81de-b2929db943b1" satisfied condition "success or failure"
Aug 17 22:19:52.011: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8ba2c93b-0b56-4cc9-81de-b2929db943b1 container client-container: 
STEP: delete the pod
Aug 17 22:19:52.051: INFO: Waiting for pod downwardapi-volume-8ba2c93b-0b56-4cc9-81de-b2929db943b1 to disappear
Aug 17 22:19:52.063: INFO: Pod downwardapi-volume-8ba2c93b-0b56-4cc9-81de-b2929db943b1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:19:52.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3798" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1338,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:19:52.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:19:55.721: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 22:19:58.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299595, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299595, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299595, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299595, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 22:20:00.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299595, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299595, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299595, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299595, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 22:20:03.072: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:20:03.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1385" for this suite.
STEP: Destroying namespace "webhook-1385-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.044 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":85,"skipped":1342,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:20:04.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 17 22:20:07.865: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:20:21.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4207" for this suite.

• [SLOW TEST:17.598 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:20:21.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 17 22:20:21.845: INFO: Waiting up to 5m0s for pod "pod-4aa7098d-8ad0-4bf5-a945-e1deb5e655e6" in namespace "emptydir-6484" to be "success or failure"
Aug 17 22:20:21.855: INFO: Pod "pod-4aa7098d-8ad0-4bf5-a945-e1deb5e655e6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.780356ms
Aug 17 22:20:23.862: INFO: Pod "pod-4aa7098d-8ad0-4bf5-a945-e1deb5e655e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016769636s
Aug 17 22:20:25.869: INFO: Pod "pod-4aa7098d-8ad0-4bf5-a945-e1deb5e655e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02384871s
STEP: Saw pod success
Aug 17 22:20:25.869: INFO: Pod "pod-4aa7098d-8ad0-4bf5-a945-e1deb5e655e6" satisfied condition "success or failure"
Aug 17 22:20:25.874: INFO: Trying to get logs from node jerma-worker pod pod-4aa7098d-8ad0-4bf5-a945-e1deb5e655e6 container test-container: 
STEP: delete the pod
Aug 17 22:20:26.013: INFO: Waiting for pod pod-4aa7098d-8ad0-4bf5-a945-e1deb5e655e6 to disappear
Aug 17 22:20:26.022: INFO: Pod pod-4aa7098d-8ad0-4bf5-a945-e1deb5e655e6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:20:26.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6484" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1389,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:20:26.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:20:29.375: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 22:20:31.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299629, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299629, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299629, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299629, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 22:20:33.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299629, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299629, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299629, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299629, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 22:20:36.680: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:20:36.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4077" for this suite.
STEP: Destroying namespace "webhook-4077-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.563 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":88,"skipped":1419,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:20:37.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:20:40.691: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 22:20:42.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299640, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299640, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299640, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299640, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 22:20:44.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299640, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299640, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299640, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299640, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 22:20:47.741: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:20:47.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7490" for this suite.
STEP: Destroying namespace "webhook-7490-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.602 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":89,"skipped":1435,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:20:48.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:20:56.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3592" for this suite.

• [SLOW TEST:8.705 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1444,"failed":0}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:20:56.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4642
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 17 22:20:57.266: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 17 22:21:29.524: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.28:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4642 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:21:29.524: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:21:29.577489       7 log.go:172] (0x4002eb2000) (0x400319d7c0) Create stream
I0817 22:21:29.577647       7 log.go:172] (0x4002eb2000) (0x400319d7c0) Stream added, broadcasting: 1
I0817 22:21:29.581174       7 log.go:172] (0x4002eb2000) Reply frame received for 1
I0817 22:21:29.581357       7 log.go:172] (0x4002eb2000) (0x400319d860) Create stream
I0817 22:21:29.581449       7 log.go:172] (0x4002eb2000) (0x400319d860) Stream added, broadcasting: 3
I0817 22:21:29.582733       7 log.go:172] (0x4002eb2000) Reply frame received for 3
I0817 22:21:29.582851       7 log.go:172] (0x4002eb2000) (0x400277a000) Create stream
I0817 22:21:29.582907       7 log.go:172] (0x4002eb2000) (0x400277a000) Stream added, broadcasting: 5
I0817 22:21:29.583849       7 log.go:172] (0x4002eb2000) Reply frame received for 5
I0817 22:21:29.631363       7 log.go:172] (0x4002eb2000) Data frame received for 3
I0817 22:21:29.631517       7 log.go:172] (0x400319d860) (3) Data frame handling
I0817 22:21:29.631634       7 log.go:172] (0x4002eb2000) Data frame received for 5
I0817 22:21:29.631788       7 log.go:172] (0x400277a000) (5) Data frame handling
I0817 22:21:29.631899       7 log.go:172] (0x400319d860) (3) Data frame sent
I0817 22:21:29.633319       7 log.go:172] (0x4002eb2000) Data frame received for 3
I0817 22:21:29.633426       7 log.go:172] (0x400319d860) (3) Data frame handling
I0817 22:21:29.634956       7 log.go:172] (0x4002eb2000) Data frame received for 1
I0817 22:21:29.635094       7 log.go:172] (0x400319d7c0) (1) Data frame handling
I0817 22:21:29.635261       7 log.go:172] (0x400319d7c0) (1) Data frame sent
I0817 22:21:29.635362       7 log.go:172] (0x4002eb2000) (0x400319d7c0) Stream removed, broadcasting: 1
I0817 22:21:29.635472       7 log.go:172] (0x4002eb2000) Go away received
I0817 22:21:29.635777       7 log.go:172] (0x4002eb2000) (0x400319d7c0) Stream removed, broadcasting: 1
I0817 22:21:29.635868       7 log.go:172] (0x4002eb2000) (0x400319d860) Stream removed, broadcasting: 3
I0817 22:21:29.635940       7 log.go:172] (0x4002eb2000) (0x400277a000) Stream removed, broadcasting: 5
Aug 17 22:21:29.636: INFO: Found all expected endpoints: [netserver-0]
Aug 17 22:21:29.652: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.81:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4642 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:21:29.652: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:21:29.706771       7 log.go:172] (0x4001fb8580) (0x40022f2b40) Create stream
I0817 22:21:29.706912       7 log.go:172] (0x4001fb8580) (0x40022f2b40) Stream added, broadcasting: 1
I0817 22:21:29.715774       7 log.go:172] (0x4001fb8580) Reply frame received for 1
I0817 22:21:29.715969       7 log.go:172] (0x4001fb8580) (0x400277a140) Create stream
I0817 22:21:29.716041       7 log.go:172] (0x4001fb8580) (0x400277a140) Stream added, broadcasting: 3
I0817 22:21:29.717495       7 log.go:172] (0x4001fb8580) Reply frame received for 3
I0817 22:21:29.717597       7 log.go:172] (0x4001fb8580) (0x40022f2be0) Create stream
I0817 22:21:29.717655       7 log.go:172] (0x4001fb8580) (0x40022f2be0) Stream added, broadcasting: 5
I0817 22:21:29.719043       7 log.go:172] (0x4001fb8580) Reply frame received for 5
I0817 22:21:29.777697       7 log.go:172] (0x4001fb8580) Data frame received for 3
I0817 22:21:29.777838       7 log.go:172] (0x400277a140) (3) Data frame handling
I0817 22:21:29.777945       7 log.go:172] (0x4001fb8580) Data frame received for 5
I0817 22:21:29.778093       7 log.go:172] (0x40022f2be0) (5) Data frame handling
I0817 22:21:29.778276       7 log.go:172] (0x400277a140) (3) Data frame sent
I0817 22:21:29.778378       7 log.go:172] (0x4001fb8580) Data frame received for 3
I0817 22:21:29.778483       7 log.go:172] (0x400277a140) (3) Data frame handling
I0817 22:21:29.779811       7 log.go:172] (0x4001fb8580) Data frame received for 1
I0817 22:21:29.779882       7 log.go:172] (0x40022f2b40) (1) Data frame handling
I0817 22:21:29.779954       7 log.go:172] (0x40022f2b40) (1) Data frame sent
I0817 22:21:29.780030       7 log.go:172] (0x4001fb8580) (0x40022f2b40) Stream removed, broadcasting: 1
I0817 22:21:29.780119       7 log.go:172] (0x4001fb8580) Go away received
I0817 22:21:29.780393       7 log.go:172] (0x4001fb8580) (0x40022f2b40) Stream removed, broadcasting: 1
I0817 22:21:29.780471       7 log.go:172] (0x4001fb8580) (0x400277a140) Stream removed, broadcasting: 3
I0817 22:21:29.780544       7 log.go:172] (0x4001fb8580) (0x40022f2be0) Stream removed, broadcasting: 5
Aug 17 22:21:29.780: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:21:29.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4642" for this suite.

• [SLOW TEST:32.880 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1446,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:21:29.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-ca7b4372-cd24-4d2a-95eb-dc9ada64a13f
STEP: Creating a pod to test consume configMaps
Aug 17 22:21:31.528: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c735300c-a0df-40ee-ac6f-f1206b81c00c" in namespace "projected-564" to be "success or failure"
Aug 17 22:21:32.116: INFO: Pod "pod-projected-configmaps-c735300c-a0df-40ee-ac6f-f1206b81c00c": Phase="Pending", Reason="", readiness=false. Elapsed: 587.074323ms
Aug 17 22:21:34.121: INFO: Pod "pod-projected-configmaps-c735300c-a0df-40ee-ac6f-f1206b81c00c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.592505594s
Aug 17 22:21:36.429: INFO: Pod "pod-projected-configmaps-c735300c-a0df-40ee-ac6f-f1206b81c00c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.89999526s
Aug 17 22:21:38.623: INFO: Pod "pod-projected-configmaps-c735300c-a0df-40ee-ac6f-f1206b81c00c": Phase="Running", Reason="", readiness=true. Elapsed: 7.094202714s
Aug 17 22:21:40.628: INFO: Pod "pod-projected-configmaps-c735300c-a0df-40ee-ac6f-f1206b81c00c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.098924994s
STEP: Saw pod success
Aug 17 22:21:40.628: INFO: Pod "pod-projected-configmaps-c735300c-a0df-40ee-ac6f-f1206b81c00c" satisfied condition "success or failure"
Aug 17 22:21:40.632: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-c735300c-a0df-40ee-ac6f-f1206b81c00c container projected-configmap-volume-test: 
STEP: delete the pod
Aug 17 22:21:40.871: INFO: Waiting for pod pod-projected-configmaps-c735300c-a0df-40ee-ac6f-f1206b81c00c to disappear
Aug 17 22:21:40.943: INFO: Pod pod-projected-configmaps-c735300c-a0df-40ee-ac6f-f1206b81c00c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:21:40.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-564" for this suite.

• [SLOW TEST:11.163 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1456,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:21:40.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 17 22:21:41.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2827'
Aug 17 22:21:42.962: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 17 22:21:42.963: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Aug 17 22:21:45.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2827'
Aug 17 22:21:47.074: INFO: stderr: ""
Aug 17 22:21:47.074: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:21:47.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2827" for this suite.

• [SLOW TEST:6.128 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1625
    should create a deployment from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":93,"skipped":1486,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:21:47.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 17 22:21:47.351: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8428 /api/v1/namespaces/watch-8428/configmaps/e2e-watch-test-label-changed cd0f3f17-81f2-4bcd-a7b8-7099343f44b4 884382 0 2020-08-17 22:21:47 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 17 22:21:47.353: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8428 /api/v1/namespaces/watch-8428/configmaps/e2e-watch-test-label-changed cd0f3f17-81f2-4bcd-a7b8-7099343f44b4 884383 0 2020-08-17 22:21:47 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 17 22:21:47.353: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8428 /api/v1/namespaces/watch-8428/configmaps/e2e-watch-test-label-changed cd0f3f17-81f2-4bcd-a7b8-7099343f44b4 884385 0 2020-08-17 22:21:47 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 17 22:21:59.026: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8428 /api/v1/namespaces/watch-8428/configmaps/e2e-watch-test-label-changed cd0f3f17-81f2-4bcd-a7b8-7099343f44b4 884423 0 2020-08-17 22:21:47 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 17 22:21:59.027: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8428 /api/v1/namespaces/watch-8428/configmaps/e2e-watch-test-label-changed cd0f3f17-81f2-4bcd-a7b8-7099343f44b4 884426 0 2020-08-17 22:21:47 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 17 22:21:59.027: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8428 /api/v1/namespaces/watch-8428/configmaps/e2e-watch-test-label-changed cd0f3f17-81f2-4bcd-a7b8-7099343f44b4 884428 0 2020-08-17 22:21:47 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:21:59.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8428" for this suite.

• [SLOW TEST:12.469 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":94,"skipped":1508,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:21:59.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:22:18.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2327" for this suite.

• [SLOW TEST:18.533 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":95,"skipped":1521,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:22:18.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 17 22:22:18.156: INFO: PodSpec: initContainers in spec.initContainers
Aug 17 22:23:07.282: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c3f195ec-82a6-49a1-a93d-0db310015442", GenerateName:"", Namespace:"init-container-2033", SelfLink:"/api/v1/namespaces/init-container-2033/pods/pod-init-c3f195ec-82a6-49a1-a93d-0db310015442", UID:"04499f0b-bff0-4ef3-80c8-0a1601e86a3e", ResourceVersion:"884676", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733299738, loc:(*time.Location)(0x726af60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"154120627"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-k86ct", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x400221e140), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-k86ct", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-k86ct", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-k86ct", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4004e32258), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4002cc8060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4004e322e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4004e32300)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x4004e32308), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x4004e3230c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299738, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299738, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299738, loc:(*time.Location)(0x726af60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299738, loc:(*time.Location)(0x726af60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.3", PodIP:"10.244.1.83", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.83"}}, StartTime:(*v1.Time)(0x4005976100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x4005976140), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x4001f6c150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://97193ca854bc90f4245569a144eeb873137f3dcf59d6047f9c3566ab1828f627", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4005976160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4005976120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0x4004e3239f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:23:07.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2033" for this suite.

• [SLOW TEST:49.237 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":96,"skipped":1531,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:23:07.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:23:10.991: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 22:23:13.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299790, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299790, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299791, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299790, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 22:23:15.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299790, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299790, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299791, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299790, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 22:23:18.104: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 17 22:23:24.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-8485 to-be-attached-pod -i -c=container1'
Aug 17 22:23:25.598: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:23:25.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8485" for this suite.
STEP: Destroying namespace "webhook-8485-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.370 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":97,"skipped":1532,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:23:25.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:23:30.801: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 22:23:32.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299810, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299810, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299811, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299810, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 22:23:34.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299810, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299810, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299811, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733299810, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 22:23:37.939: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:23:38.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9837" for this suite.
STEP: Destroying namespace "webhook-9837-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.562 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":98,"skipped":1532,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:23:38.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-42e551a7-dc1b-413b-950f-281acfb68fb6
STEP: Creating a pod to test consume secrets
Aug 17 22:23:38.362: INFO: Waiting up to 5m0s for pod "pod-secrets-3753887a-5dc5-4d6c-8f21-4aea3e332d07" in namespace "secrets-4547" to be "success or failure"
Aug 17 22:23:38.383: INFO: Pod "pod-secrets-3753887a-5dc5-4d6c-8f21-4aea3e332d07": Phase="Pending", Reason="", readiness=false. Elapsed: 20.916963ms
Aug 17 22:23:40.390: INFO: Pod "pod-secrets-3753887a-5dc5-4d6c-8f21-4aea3e332d07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027812588s
Aug 17 22:23:42.441: INFO: Pod "pod-secrets-3753887a-5dc5-4d6c-8f21-4aea3e332d07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078552579s
Aug 17 22:23:44.449: INFO: Pod "pod-secrets-3753887a-5dc5-4d6c-8f21-4aea3e332d07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.08654964s
STEP: Saw pod success
Aug 17 22:23:44.449: INFO: Pod "pod-secrets-3753887a-5dc5-4d6c-8f21-4aea3e332d07" satisfied condition "success or failure"
Aug 17 22:23:44.913: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-3753887a-5dc5-4d6c-8f21-4aea3e332d07 container secret-volume-test: 
STEP: delete the pod
Aug 17 22:23:45.069: INFO: Waiting for pod pod-secrets-3753887a-5dc5-4d6c-8f21-4aea3e332d07 to disappear
Aug 17 22:23:45.109: INFO: Pod pod-secrets-3753887a-5dc5-4d6c-8f21-4aea3e332d07 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:23:45.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4547" for this suite.

• [SLOW TEST:6.858 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1554,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:23:45.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 17 22:23:45.424: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:23:55.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2559" for this suite.

• [SLOW TEST:10.421 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":100,"skipped":1578,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:23:55.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 17 22:23:55.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-80'
Aug 17 22:23:57.058: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 17 22:23:57.058: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 17 22:23:57.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-80'
Aug 17 22:23:58.355: INFO: stderr: ""
Aug 17 22:23:58.355: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:23:58.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-80" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":101,"skipped":1585,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:23:58.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 17 22:24:06.317: INFO: Successfully updated pod "annotationupdatebd8e0ab2-cf79-4259-b526-8c6ebe8fe8b9"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:24:10.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4026" for this suite.

• [SLOW TEST:12.130 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1585,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:24:10.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-8164eb47-833b-4d11-8dfa-d2d9a22be43f
STEP: Creating a pod to test consume configMaps
Aug 17 22:24:10.779: INFO: Waiting up to 5m0s for pod "pod-configmaps-4477f840-125b-4cb1-b3fa-ebd0e34425a2" in namespace "configmap-6319" to be "success or failure"
Aug 17 22:24:10.806: INFO: Pod "pod-configmaps-4477f840-125b-4cb1-b3fa-ebd0e34425a2": Phase="Pending", Reason="", readiness=false. Elapsed: 25.926186ms
Aug 17 22:24:12.813: INFO: Pod "pod-configmaps-4477f840-125b-4cb1-b3fa-ebd0e34425a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033430097s
Aug 17 22:24:14.834: INFO: Pod "pod-configmaps-4477f840-125b-4cb1-b3fa-ebd0e34425a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054345159s
STEP: Saw pod success
Aug 17 22:24:14.834: INFO: Pod "pod-configmaps-4477f840-125b-4cb1-b3fa-ebd0e34425a2" satisfied condition "success or failure"
Aug 17 22:24:14.838: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-4477f840-125b-4cb1-b3fa-ebd0e34425a2 container configmap-volume-test: 
STEP: delete the pod
Aug 17 22:24:14.862: INFO: Waiting for pod pod-configmaps-4477f840-125b-4cb1-b3fa-ebd0e34425a2 to disappear
Aug 17 22:24:14.867: INFO: Pod pod-configmaps-4477f840-125b-4cb1-b3fa-ebd0e34425a2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:24:14.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6319" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1588,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:24:14.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3283, will wait for the garbage collector to delete the pods
Aug 17 22:24:21.294: INFO: Deleting Job.batch foo took: 10.616228ms
Aug 17 22:24:21.594: INFO: Terminating Job.batch foo pods took: 300.660459ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:25:02.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3283" for this suite.

• [SLOW TEST:47.145 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":104,"skipped":1601,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:25:02.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Aug 17 22:25:02.641: INFO: created pod pod-service-account-defaultsa
Aug 17 22:25:02.641: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 17 22:25:02.681: INFO: created pod pod-service-account-mountsa
Aug 17 22:25:02.681: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 17 22:25:02.694: INFO: created pod pod-service-account-nomountsa
Aug 17 22:25:02.694: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 17 22:25:02.760: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 17 22:25:02.761: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 17 22:25:02.796: INFO: created pod pod-service-account-mountsa-mountspec
Aug 17 22:25:02.796: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 17 22:25:02.856: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 17 22:25:02.856: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 17 22:25:02.906: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 17 22:25:02.906: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 17 22:25:02.959: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 17 22:25:02.959: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 17 22:25:02.995: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 17 22:25:02.995: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:25:02.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7534" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":105,"skipped":1640,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:25:03.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Aug 17 22:25:03.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-150'
Aug 17 22:25:05.898: INFO: stderr: ""
Aug 17 22:25:05.898: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 17 22:25:05.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-150'
Aug 17 22:25:07.595: INFO: stderr: ""
Aug 17 22:25:07.595: INFO: stdout: "update-demo-nautilus-99r7c update-demo-nautilus-wws96 "
Aug 17 22:25:07.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-99r7c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-150'
Aug 17 22:25:08.916: INFO: stderr: ""
Aug 17 22:25:08.916: INFO: stdout: ""
Aug 17 22:25:08.917: INFO: update-demo-nautilus-99r7c is created but not running
Aug 17 22:25:13.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-150'
Aug 17 22:25:15.516: INFO: stderr: ""
Aug 17 22:25:15.516: INFO: stdout: "update-demo-nautilus-99r7c update-demo-nautilus-wws96 "
Aug 17 22:25:15.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-99r7c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-150'
Aug 17 22:25:17.014: INFO: stderr: ""
Aug 17 22:25:17.014: INFO: stdout: ""
Aug 17 22:25:17.014: INFO: update-demo-nautilus-99r7c is created but not running
Aug 17 22:25:22.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-150'
Aug 17 22:25:23.269: INFO: stderr: ""
Aug 17 22:25:23.269: INFO: stdout: "update-demo-nautilus-99r7c update-demo-nautilus-wws96 "
Aug 17 22:25:23.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-99r7c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-150'
Aug 17 22:25:24.541: INFO: stderr: ""
Aug 17 22:25:24.541: INFO: stdout: "true"
Aug 17 22:25:24.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-99r7c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-150'
Aug 17 22:25:25.807: INFO: stderr: ""
Aug 17 22:25:25.807: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 17 22:25:25.807: INFO: validating pod update-demo-nautilus-99r7c
Aug 17 22:25:25.814: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 17 22:25:25.815: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 17 22:25:25.815: INFO: update-demo-nautilus-99r7c is verified up and running
Aug 17 22:25:25.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wws96 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-150'
Aug 17 22:25:27.062: INFO: stderr: ""
Aug 17 22:25:27.062: INFO: stdout: "true"
Aug 17 22:25:27.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wws96 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-150'
Aug 17 22:25:28.290: INFO: stderr: ""
Aug 17 22:25:28.290: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 17 22:25:28.290: INFO: validating pod update-demo-nautilus-wws96
Aug 17 22:25:28.295: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 17 22:25:28.296: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 17 22:25:28.296: INFO: update-demo-nautilus-wws96 is verified up and running
STEP: rolling-update to new replication controller
Aug 17 22:25:28.305: INFO: scanned /root for discovery docs: 
Aug 17 22:25:28.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-150'
Aug 17 22:25:58.627: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 17 22:25:58.627: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in pods with label name=update-demo to come up.
Aug 17 22:25:58.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-150'
Aug 17 22:25:59.914: INFO: stderr: ""
Aug 17 22:25:59.914: INFO: stdout: "update-demo-kitten-pnckb update-demo-kitten-qkh62 "
Aug 17 22:25:59.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pnckb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-150'
Aug 17 22:26:01.184: INFO: stderr: ""
Aug 17 22:26:01.184: INFO: stdout: "true"
Aug 17 22:26:01.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pnckb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-150'
Aug 17 22:26:02.441: INFO: stderr: ""
Aug 17 22:26:02.441: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 17 22:26:02.441: INFO: validating pod update-demo-kitten-pnckb
Aug 17 22:26:02.449: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 17 22:26:02.449: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug 17 22:26:02.449: INFO: update-demo-kitten-pnckb is verified up and running
Aug 17 22:26:02.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qkh62 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-150'
Aug 17 22:26:03.701: INFO: stderr: ""
Aug 17 22:26:03.701: INFO: stdout: "true"
Aug 17 22:26:03.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qkh62 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-150'
Aug 17 22:26:04.975: INFO: stderr: ""
Aug 17 22:26:04.975: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 17 22:26:04.975: INFO: validating pod update-demo-kitten-qkh62
Aug 17 22:26:04.981: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 17 22:26:04.981: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug 17 22:26:04.981: INFO: update-demo-kitten-qkh62 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:26:04.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-150" for this suite.

• [SLOW TEST:61.878 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":106,"skipped":1663,"failed":0}
SSSSSSS
------------------------------
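The rolling update above is driven entirely by polling: the go-templates passed to kubectl get walk .status.containerStatuses and print "true" only once the update-demo container is Running. The same check expressed through client-go, as a sketch (assumes an already configured clientset):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// containersRunning reports whether the named container is Running in every
// pod matching the label selector, the condition the go-templates above test.
func containersRunning(ctx context.Context, cs kubernetes.Interface, ns, selector, container string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, pod := range pods.Items {
		running := false
		for _, st := range pod.Status.ContainerStatuses {
			if st.Name == container && st.State.Running != nil {
				running = true
			}
		}
		if !running {
			return false, nil
		}
	}
	return true, nil
}

For the run above the call would be containersRunning(ctx, cs, "kubectl-150", "name=update-demo", "update-demo"), retried until it returns true.
------------------------------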
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:26:04.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:26:05.398: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:26:06.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8591" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":107,"skipped":1670,"failed":0}

------------------------------
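The CRD test above gets, updates, and patches the /status subresource rather than the main resource. With the apiextensions clientset the subresource is selected by the trailing variadic argument to Patch; a sketch (recent apiextensions-apiserver client assumed):

package sketch

import (
	"context"

	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// patchCRDStatus merge-patches only the status subresource of a CRD;
// the trailing "status" argument routes the request to /status.
func patchCRDStatus(ctx context.Context, cs clientset.Interface, name string, patch []byte) error {
	_, err := cs.ApiextensionsV1().CustomResourceDefinitions().
		Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}

The generated client also exposes UpdateStatus for whole-object status updates, which is the "updating" half of the test name.
------------------------------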
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:26:06.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 17 22:26:18.865: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4079 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:26:18.866: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:26:18.931670       7 log.go:172] (0x4002eb2790) (0x40024301e0) Create stream
I0817 22:26:18.931842       7 log.go:172] (0x4002eb2790) (0x40024301e0) Stream added, broadcasting: 1
I0817 22:26:18.935489       7 log.go:172] (0x4002eb2790) Reply frame received for 1
I0817 22:26:18.935658       7 log.go:172] (0x4002eb2790) (0x400319d400) Create stream
I0817 22:26:18.935746       7 log.go:172] (0x4002eb2790) (0x400319d400) Stream added, broadcasting: 3
I0817 22:26:18.937282       7 log.go:172] (0x4002eb2790) Reply frame received for 3
I0817 22:26:18.937435       7 log.go:172] (0x4002eb2790) (0x4002430320) Create stream
I0817 22:26:18.937514       7 log.go:172] (0x4002eb2790) (0x4002430320) Stream added, broadcasting: 5
I0817 22:26:18.939018       7 log.go:172] (0x4002eb2790) Reply frame received for 5
I0817 22:26:19.005539       7 log.go:172] (0x4002eb2790) Data frame received for 5
I0817 22:26:19.005691       7 log.go:172] (0x4002430320) (5) Data frame handling
I0817 22:26:19.005806       7 log.go:172] (0x4002eb2790) Data frame received for 3
I0817 22:26:19.005897       7 log.go:172] (0x400319d400) (3) Data frame handling
I0817 22:26:19.005976       7 log.go:172] (0x400319d400) (3) Data frame sent
I0817 22:26:19.006035       7 log.go:172] (0x4002eb2790) Data frame received for 3
I0817 22:26:19.006091       7 log.go:172] (0x400319d400) (3) Data frame handling
I0817 22:26:19.007394       7 log.go:172] (0x4002eb2790) Data frame received for 1
I0817 22:26:19.007489       7 log.go:172] (0x40024301e0) (1) Data frame handling
I0817 22:26:19.007580       7 log.go:172] (0x40024301e0) (1) Data frame sent
I0817 22:26:19.007677       7 log.go:172] (0x4002eb2790) (0x40024301e0) Stream removed, broadcasting: 1
I0817 22:26:19.007787       7 log.go:172] (0x4002eb2790) Go away received
I0817 22:26:19.008281       7 log.go:172] (0x4002eb2790) (0x40024301e0) Stream removed, broadcasting: 1
I0817 22:26:19.008414       7 log.go:172] (0x4002eb2790) (0x400319d400) Stream removed, broadcasting: 3
I0817 22:26:19.008538       7 log.go:172] (0x4002eb2790) (0x4002430320) Stream removed, broadcasting: 5
Aug 17 22:26:19.008: INFO: Exec stderr: ""
Aug 17 22:26:19.009: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4079 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:26:19.009: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:26:19.069849       7 log.go:172] (0x4003264bb0) (0x400277ab40) Create stream
I0817 22:26:19.070130       7 log.go:172] (0x4003264bb0) (0x400277ab40) Stream added, broadcasting: 1
I0817 22:26:19.073577       7 log.go:172] (0x4003264bb0) Reply frame received for 1
I0817 22:26:19.073744       7 log.go:172] (0x4003264bb0) (0x40028ac0a0) Create stream
I0817 22:26:19.073840       7 log.go:172] (0x4003264bb0) (0x40028ac0a0) Stream added, broadcasting: 3
I0817 22:26:19.075414       7 log.go:172] (0x4003264bb0) Reply frame received for 3
I0817 22:26:19.075552       7 log.go:172] (0x4003264bb0) (0x4002430460) Create stream
I0817 22:26:19.075632       7 log.go:172] (0x4003264bb0) (0x4002430460) Stream added, broadcasting: 5
I0817 22:26:19.077203       7 log.go:172] (0x4003264bb0) Reply frame received for 5
I0817 22:26:19.132091       7 log.go:172] (0x4003264bb0) Data frame received for 5
I0817 22:26:19.132284       7 log.go:172] (0x4002430460) (5) Data frame handling
I0817 22:26:19.132416       7 log.go:172] (0x4003264bb0) Data frame received for 3
I0817 22:26:19.132558       7 log.go:172] (0x40028ac0a0) (3) Data frame handling
I0817 22:26:19.132839       7 log.go:172] (0x40028ac0a0) (3) Data frame sent
I0817 22:26:19.132990       7 log.go:172] (0x4003264bb0) Data frame received for 3
I0817 22:26:19.133112       7 log.go:172] (0x40028ac0a0) (3) Data frame handling
I0817 22:26:19.133917       7 log.go:172] (0x4003264bb0) Data frame received for 1
I0817 22:26:19.134037       7 log.go:172] (0x400277ab40) (1) Data frame handling
I0817 22:26:19.134153       7 log.go:172] (0x400277ab40) (1) Data frame sent
I0817 22:26:19.134278       7 log.go:172] (0x4003264bb0) (0x400277ab40) Stream removed, broadcasting: 1
I0817 22:26:19.134449       7 log.go:172] (0x4003264bb0) Go away received
I0817 22:26:19.135481       7 log.go:172] (0x4003264bb0) (0x400277ab40) Stream removed, broadcasting: 1
I0817 22:26:19.135607       7 log.go:172] (0x4003264bb0) (0x40028ac0a0) Stream removed, broadcasting: 3
I0817 22:26:19.135722       7 log.go:172] (0x4003264bb0) (0x4002430460) Stream removed, broadcasting: 5
Aug 17 22:26:19.135: INFO: Exec stderr: ""
Aug 17 22:26:19.136: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4079 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:26:19.136: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:26:19.196536       7 log.go:172] (0x4002f7a420) (0x40028ac320) Create stream
I0817 22:26:19.196850       7 log.go:172] (0x4002f7a420) (0x40028ac320) Stream added, broadcasting: 1
I0817 22:26:19.201604       7 log.go:172] (0x4002f7a420) Reply frame received for 1
I0817 22:26:19.201802       7 log.go:172] (0x4002f7a420) (0x40028ac460) Create stream
I0817 22:26:19.201910       7 log.go:172] (0x4002f7a420) (0x40028ac460) Stream added, broadcasting: 3
I0817 22:26:19.204368       7 log.go:172] (0x4002f7a420) Reply frame received for 3
I0817 22:26:19.204556       7 log.go:172] (0x4002f7a420) (0x400277abe0) Create stream
I0817 22:26:19.204639       7 log.go:172] (0x4002f7a420) (0x400277abe0) Stream added, broadcasting: 5
I0817 22:26:19.206164       7 log.go:172] (0x4002f7a420) Reply frame received for 5
I0817 22:26:19.254626       7 log.go:172] (0x4002f7a420) Data frame received for 5
I0817 22:26:19.254796       7 log.go:172] (0x400277abe0) (5) Data frame handling
I0817 22:26:19.254992       7 log.go:172] (0x4002f7a420) Data frame received for 3
I0817 22:26:19.255162       7 log.go:172] (0x40028ac460) (3) Data frame handling
I0817 22:26:19.255325       7 log.go:172] (0x40028ac460) (3) Data frame sent
I0817 22:26:19.255481       7 log.go:172] (0x4002f7a420) Data frame received for 3
I0817 22:26:19.255605       7 log.go:172] (0x40028ac460) (3) Data frame handling
I0817 22:26:19.256055       7 log.go:172] (0x4002f7a420) Data frame received for 1
I0817 22:26:19.256195       7 log.go:172] (0x40028ac320) (1) Data frame handling
I0817 22:26:19.256354       7 log.go:172] (0x40028ac320) (1) Data frame sent
I0817 22:26:19.256513       7 log.go:172] (0x4002f7a420) (0x40028ac320) Stream removed, broadcasting: 1
I0817 22:26:19.256704       7 log.go:172] (0x4002f7a420) Go away received
I0817 22:26:19.257079       7 log.go:172] (0x4002f7a420) (0x40028ac320) Stream removed, broadcasting: 1
I0817 22:26:19.257251       7 log.go:172] (0x4002f7a420) (0x40028ac460) Stream removed, broadcasting: 3
I0817 22:26:19.257451       7 log.go:172] (0x4002f7a420) (0x400277abe0) Stream removed, broadcasting: 5
Aug 17 22:26:19.257: INFO: Exec stderr: ""
Aug 17 22:26:19.257: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4079 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:26:19.257: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:26:19.311518       7 log.go:172] (0x4002eb2dc0) (0x40024306e0) Create stream
I0817 22:26:19.311678       7 log.go:172] (0x4002eb2dc0) (0x40024306e0) Stream added, broadcasting: 1
I0817 22:26:19.316330       7 log.go:172] (0x4002eb2dc0) Reply frame received for 1
I0817 22:26:19.316471       7 log.go:172] (0x4002eb2dc0) (0x400277ac80) Create stream
I0817 22:26:19.316552       7 log.go:172] (0x4002eb2dc0) (0x400277ac80) Stream added, broadcasting: 3
I0817 22:26:19.318519       7 log.go:172] (0x4002eb2dc0) Reply frame received for 3
I0817 22:26:19.318752       7 log.go:172] (0x4002eb2dc0) (0x4002430780) Create stream
I0817 22:26:19.318910       7 log.go:172] (0x4002eb2dc0) (0x4002430780) Stream added, broadcasting: 5
I0817 22:26:19.323288       7 log.go:172] (0x4002eb2dc0) Reply frame received for 5
I0817 22:26:19.384627       7 log.go:172] (0x4002eb2dc0) Data frame received for 3
I0817 22:26:19.386078       7 log.go:172] (0x400277ac80) (3) Data frame handling
I0817 22:26:19.386224       7 log.go:172] (0x4002eb2dc0) Data frame received for 5
I0817 22:26:19.386548       7 log.go:172] (0x4002430780) (5) Data frame handling
I0817 22:26:19.386732       7 log.go:172] (0x4002eb2dc0) Data frame received for 1
I0817 22:26:19.386892       7 log.go:172] (0x40024306e0) (1) Data frame handling
I0817 22:26:19.387143       7 log.go:172] (0x40024306e0) (1) Data frame sent
I0817 22:26:19.387298       7 log.go:172] (0x4002eb2dc0) (0x40024306e0) Stream removed, broadcasting: 1
I0817 22:26:19.387467       7 log.go:172] (0x400277ac80) (3) Data frame sent
I0817 22:26:19.387614       7 log.go:172] (0x4002eb2dc0) Data frame received for 3
I0817 22:26:19.387745       7 log.go:172] (0x400277ac80) (3) Data frame handling
I0817 22:26:19.387919       7 log.go:172] (0x4002eb2dc0) Go away received
I0817 22:26:19.388043       7 log.go:172] (0x4002eb2dc0) (0x40024306e0) Stream removed, broadcasting: 1
I0817 22:26:19.388206       7 log.go:172] (0x4002eb2dc0) (0x400277ac80) Stream removed, broadcasting: 3
I0817 22:26:19.388370       7 log.go:172] (0x4002eb2dc0) (0x4002430780) Stream removed, broadcasting: 5
Aug 17 22:26:19.388: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 17 22:26:19.388: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4079 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:26:19.389: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:26:19.445414       7 log.go:172] (0x4002f16370) (0x4002f0c820) Create stream
I0817 22:26:19.445619       7 log.go:172] (0x4002f16370) (0x4002f0c820) Stream added, broadcasting: 1
I0817 22:26:19.449329       7 log.go:172] (0x4002f16370) Reply frame received for 1
I0817 22:26:19.449484       7 log.go:172] (0x4002f16370) (0x4002430820) Create stream
I0817 22:26:19.449565       7 log.go:172] (0x4002f16370) (0x4002430820) Stream added, broadcasting: 3
I0817 22:26:19.451040       7 log.go:172] (0x4002f16370) Reply frame received for 3
I0817 22:26:19.451199       7 log.go:172] (0x4002f16370) (0x400319d4a0) Create stream
I0817 22:26:19.451305       7 log.go:172] (0x4002f16370) (0x400319d4a0) Stream added, broadcasting: 5
I0817 22:26:19.452799       7 log.go:172] (0x4002f16370) Reply frame received for 5
I0817 22:26:19.520228       7 log.go:172] (0x4002f16370) Data frame received for 3
I0817 22:26:19.520354       7 log.go:172] (0x4002430820) (3) Data frame handling
I0817 22:26:19.520483       7 log.go:172] (0x4002f16370) Data frame received for 5
I0817 22:26:19.520608       7 log.go:172] (0x400319d4a0) (5) Data frame handling
I0817 22:26:19.520797       7 log.go:172] (0x4002430820) (3) Data frame sent
I0817 22:26:19.520913       7 log.go:172] (0x4002f16370) Data frame received for 3
I0817 22:26:19.521045       7 log.go:172] (0x4002430820) (3) Data frame handling
I0817 22:26:19.521940       7 log.go:172] (0x4002f16370) Data frame received for 1
I0817 22:26:19.522020       7 log.go:172] (0x4002f0c820) (1) Data frame handling
I0817 22:26:19.522130       7 log.go:172] (0x4002f0c820) (1) Data frame sent
I0817 22:26:19.522265       7 log.go:172] (0x4002f16370) (0x4002f0c820) Stream removed, broadcasting: 1
I0817 22:26:19.522407       7 log.go:172] (0x4002f16370) Go away received
I0817 22:26:19.522776       7 log.go:172] (0x4002f16370) (0x4002f0c820) Stream removed, broadcasting: 1
I0817 22:26:19.522854       7 log.go:172] (0x4002f16370) (0x4002430820) Stream removed, broadcasting: 3
I0817 22:26:19.522930       7 log.go:172] (0x4002f16370) (0x400319d4a0) Stream removed, broadcasting: 5
Aug 17 22:26:19.522: INFO: Exec stderr: ""
Aug 17 22:26:19.523: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4079 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:26:19.523: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:26:19.578237       7 log.go:172] (0x4002e58bb0) (0x40024892c0) Create stream
I0817 22:26:19.578361       7 log.go:172] (0x4002e58bb0) (0x40024892c0) Stream added, broadcasting: 1
I0817 22:26:19.582217       7 log.go:172] (0x4002e58bb0) Reply frame received for 1
I0817 22:26:19.582453       7 log.go:172] (0x4002e58bb0) (0x400277ae60) Create stream
I0817 22:26:19.582567       7 log.go:172] (0x4002e58bb0) (0x400277ae60) Stream added, broadcasting: 3
I0817 22:26:19.584642       7 log.go:172] (0x4002e58bb0) Reply frame received for 3
I0817 22:26:19.584848       7 log.go:172] (0x4002e58bb0) (0x4002489360) Create stream
I0817 22:26:19.584922       7 log.go:172] (0x4002e58bb0) (0x4002489360) Stream added, broadcasting: 5
I0817 22:26:19.586365       7 log.go:172] (0x4002e58bb0) Reply frame received for 5
I0817 22:26:19.651946       7 log.go:172] (0x4002e58bb0) Data frame received for 5
I0817 22:26:19.652178       7 log.go:172] (0x4002489360) (5) Data frame handling
I0817 22:26:19.652343       7 log.go:172] (0x4002e58bb0) Data frame received for 3
I0817 22:26:19.652542       7 log.go:172] (0x400277ae60) (3) Data frame handling
I0817 22:26:19.652857       7 log.go:172] (0x400277ae60) (3) Data frame sent
I0817 22:26:19.653076       7 log.go:172] (0x4002e58bb0) Data frame received for 3
I0817 22:26:19.653269       7 log.go:172] (0x400277ae60) (3) Data frame handling
I0817 22:26:19.653415       7 log.go:172] (0x4002e58bb0) Data frame received for 1
I0817 22:26:19.653558       7 log.go:172] (0x40024892c0) (1) Data frame handling
I0817 22:26:19.653693       7 log.go:172] (0x40024892c0) (1) Data frame sent
I0817 22:26:19.653827       7 log.go:172] (0x4002e58bb0) (0x40024892c0) Stream removed, broadcasting: 1
I0817 22:26:19.653959       7 log.go:172] (0x4002e58bb0) Go away received
I0817 22:26:19.654441       7 log.go:172] (0x4002e58bb0) (0x40024892c0) Stream removed, broadcasting: 1
I0817 22:26:19.654630       7 log.go:172] (0x4002e58bb0) (0x400277ae60) Stream removed, broadcasting: 3
I0817 22:26:19.654812       7 log.go:172] (0x4002e58bb0) (0x4002489360) Stream removed, broadcasting: 5
Aug 17 22:26:19.654: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 17 22:26:19.655: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4079 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:26:19.655: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:26:19.714381       7 log.go:172] (0x4002f7aa50) (0x40028ac640) Create stream
I0817 22:26:19.714559       7 log.go:172] (0x4002f7aa50) (0x40028ac640) Stream added, broadcasting: 1
I0817 22:26:19.718755       7 log.go:172] (0x4002f7aa50) Reply frame received for 1
I0817 22:26:19.718981       7 log.go:172] (0x4002f7aa50) (0x400277af00) Create stream
I0817 22:26:19.719078       7 log.go:172] (0x4002f7aa50) (0x400277af00) Stream added, broadcasting: 3
I0817 22:26:19.721328       7 log.go:172] (0x4002f7aa50) Reply frame received for 3
I0817 22:26:19.721496       7 log.go:172] (0x4002f7aa50) (0x400277b040) Create stream
I0817 22:26:19.721588       7 log.go:172] (0x4002f7aa50) (0x400277b040) Stream added, broadcasting: 5
I0817 22:26:19.723184       7 log.go:172] (0x4002f7aa50) Reply frame received for 5
I0817 22:26:19.790646       7 log.go:172] (0x4002f7aa50) Data frame received for 5
I0817 22:26:19.790784       7 log.go:172] (0x400277b040) (5) Data frame handling
I0817 22:26:19.790935       7 log.go:172] (0x4002f7aa50) Data frame received for 3
I0817 22:26:19.791079       7 log.go:172] (0x400277af00) (3) Data frame handling
I0817 22:26:19.791227       7 log.go:172] (0x400277af00) (3) Data frame sent
I0817 22:26:19.791357       7 log.go:172] (0x4002f7aa50) Data frame received for 3
I0817 22:26:19.791463       7 log.go:172] (0x400277af00) (3) Data frame handling
I0817 22:26:19.792182       7 log.go:172] (0x4002f7aa50) Data frame received for 1
I0817 22:26:19.792349       7 log.go:172] (0x40028ac640) (1) Data frame handling
I0817 22:26:19.792483       7 log.go:172] (0x40028ac640) (1) Data frame sent
I0817 22:26:19.792618       7 log.go:172] (0x4002f7aa50) (0x40028ac640) Stream removed, broadcasting: 1
I0817 22:26:19.792945       7 log.go:172] (0x4002f7aa50) Go away received
I0817 22:26:19.793154       7 log.go:172] (0x4002f7aa50) (0x40028ac640) Stream removed, broadcasting: 1
I0817 22:26:19.793332       7 log.go:172] (0x4002f7aa50) (0x400277af00) Stream removed, broadcasting: 3
I0817 22:26:19.793484       7 log.go:172] (0x4002f7aa50) (0x400277b040) Stream removed, broadcasting: 5
Aug 17 22:26:19.793: INFO: Exec stderr: ""
Aug 17 22:26:19.793: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4079 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:26:19.793: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:26:19.852433       7 log.go:172] (0x4002f16b00) (0x4002f0ca00) Create stream
I0817 22:26:19.852632       7 log.go:172] (0x4002f16b00) (0x4002f0ca00) Stream added, broadcasting: 1
I0817 22:26:19.857298       7 log.go:172] (0x4002f16b00) Reply frame received for 1
I0817 22:26:19.857557       7 log.go:172] (0x4002f16b00) (0x40028ac780) Create stream
I0817 22:26:19.857694       7 log.go:172] (0x4002f16b00) (0x40028ac780) Stream added, broadcasting: 3
I0817 22:26:19.859300       7 log.go:172] (0x4002f16b00) Reply frame received for 3
I0817 22:26:19.859495       7 log.go:172] (0x4002f16b00) (0x400319d540) Create stream
I0817 22:26:19.859601       7 log.go:172] (0x4002f16b00) (0x400319d540) Stream added, broadcasting: 5
I0817 22:26:19.861237       7 log.go:172] (0x4002f16b00) Reply frame received for 5
I0817 22:26:19.936343       7 log.go:172] (0x4002f16b00) Data frame received for 5
I0817 22:26:19.936549       7 log.go:172] (0x400319d540) (5) Data frame handling
I0817 22:26:19.936706       7 log.go:172] (0x4002f16b00) Data frame received for 3
I0817 22:26:19.936863       7 log.go:172] (0x40028ac780) (3) Data frame handling
I0817 22:26:19.936971       7 log.go:172] (0x40028ac780) (3) Data frame sent
I0817 22:26:19.937051       7 log.go:172] (0x4002f16b00) Data frame received for 3
I0817 22:26:19.937135       7 log.go:172] (0x40028ac780) (3) Data frame handling
I0817 22:26:19.937884       7 log.go:172] (0x4002f16b00) Data frame received for 1
I0817 22:26:19.937962       7 log.go:172] (0x4002f0ca00) (1) Data frame handling
I0817 22:26:19.938036       7 log.go:172] (0x4002f0ca00) (1) Data frame sent
I0817 22:26:19.938131       7 log.go:172] (0x4002f16b00) (0x4002f0ca00) Stream removed, broadcasting: 1
I0817 22:26:19.938241       7 log.go:172] (0x4002f16b00) Go away received
I0817 22:26:19.938816       7 log.go:172] (0x4002f16b00) (0x4002f0ca00) Stream removed, broadcasting: 1
I0817 22:26:19.939046       7 log.go:172] (0x4002f16b00) (0x40028ac780) Stream removed, broadcasting: 3
I0817 22:26:19.939177       7 log.go:172] (0x4002f16b00) (0x400319d540) Stream removed, broadcasting: 5
Aug 17 22:26:19.939: INFO: Exec stderr: ""
Aug 17 22:26:19.939: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4079 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:26:19.939: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:26:19.990655       7 log.go:172] (0x4002f17130) (0x4002f0cd20) Create stream
I0817 22:26:19.990900       7 log.go:172] (0x4002f17130) (0x4002f0cd20) Stream added, broadcasting: 1
I0817 22:26:19.995194       7 log.go:172] (0x4002f17130) Reply frame received for 1
I0817 22:26:19.995474       7 log.go:172] (0x4002f17130) (0x4002489400) Create stream
I0817 22:26:19.995585       7 log.go:172] (0x4002f17130) (0x4002489400) Stream added, broadcasting: 3
I0817 22:26:19.997254       7 log.go:172] (0x4002f17130) Reply frame received for 3
I0817 22:26:19.997399       7 log.go:172] (0x4002f17130) (0x400319d5e0) Create stream
I0817 22:26:19.997474       7 log.go:172] (0x4002f17130) (0x400319d5e0) Stream added, broadcasting: 5
I0817 22:26:19.998831       7 log.go:172] (0x4002f17130) Reply frame received for 5
I0817 22:26:20.058805       7 log.go:172] (0x4002f17130) Data frame received for 5
I0817 22:26:20.059031       7 log.go:172] (0x400319d5e0) (5) Data frame handling
I0817 22:26:20.059281       7 log.go:172] (0x4002f17130) Data frame received for 3
I0817 22:26:20.059463       7 log.go:172] (0x4002489400) (3) Data frame handling
I0817 22:26:20.059613       7 log.go:172] (0x4002489400) (3) Data frame sent
I0817 22:26:20.059765       7 log.go:172] (0x4002f17130) Data frame received for 3
I0817 22:26:20.059893       7 log.go:172] (0x4002489400) (3) Data frame handling
I0817 22:26:20.059978       7 log.go:172] (0x4002f17130) Data frame received for 1
I0817 22:26:20.060054       7 log.go:172] (0x4002f0cd20) (1) Data frame handling
I0817 22:26:20.060158       7 log.go:172] (0x4002f0cd20) (1) Data frame sent
I0817 22:26:20.060237       7 log.go:172] (0x4002f17130) (0x4002f0cd20) Stream removed, broadcasting: 1
I0817 22:26:20.060337       7 log.go:172] (0x4002f17130) Go away received
I0817 22:26:20.060902       7 log.go:172] (0x4002f17130) (0x4002f0cd20) Stream removed, broadcasting: 1
I0817 22:26:20.061057       7 log.go:172] (0x4002f17130) (0x4002489400) Stream removed, broadcasting: 3
I0817 22:26:20.061188       7 log.go:172] (0x4002f17130) (0x400319d5e0) Stream removed, broadcasting: 5
Aug 17 22:26:20.061: INFO: Exec stderr: ""
Aug 17 22:26:20.061: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4079 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:26:20.061: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:26:20.129244       7 log.go:172] (0x4002f178c0) (0x4002f0cf00) Create stream
I0817 22:26:20.129480       7 log.go:172] (0x4002f178c0) (0x4002f0cf00) Stream added, broadcasting: 1
I0817 22:26:20.134293       7 log.go:172] (0x4002f178c0) Reply frame received for 1
I0817 22:26:20.134481       7 log.go:172] (0x4002f178c0) (0x4002f0d040) Create stream
I0817 22:26:20.134555       7 log.go:172] (0x4002f178c0) (0x4002f0d040) Stream added, broadcasting: 3
I0817 22:26:20.136313       7 log.go:172] (0x4002f178c0) Reply frame received for 3
I0817 22:26:20.136510       7 log.go:172] (0x4002f178c0) (0x4002f0d0e0) Create stream
I0817 22:26:20.136616       7 log.go:172] (0x4002f178c0) (0x4002f0d0e0) Stream added, broadcasting: 5
I0817 22:26:20.138085       7 log.go:172] (0x4002f178c0) Reply frame received for 5
I0817 22:26:20.189814       7 log.go:172] (0x4002f178c0) Data frame received for 3
I0817 22:26:20.189958       7 log.go:172] (0x4002f0d040) (3) Data frame handling
I0817 22:26:20.190042       7 log.go:172] (0x4002f0d040) (3) Data frame sent
I0817 22:26:20.190129       7 log.go:172] (0x4002f178c0) Data frame received for 3
I0817 22:26:20.190244       7 log.go:172] (0x4002f178c0) Data frame received for 5
I0817 22:26:20.190391       7 log.go:172] (0x4002f0d0e0) (5) Data frame handling
I0817 22:26:20.190642       7 log.go:172] (0x4002f0d040) (3) Data frame handling
I0817 22:26:20.191505       7 log.go:172] (0x4002f178c0) Data frame received for 1
I0817 22:26:20.191673       7 log.go:172] (0x4002f0cf00) (1) Data frame handling
I0817 22:26:20.191834       7 log.go:172] (0x4002f0cf00) (1) Data frame sent
I0817 22:26:20.192009       7 log.go:172] (0x4002f178c0) (0x4002f0cf00) Stream removed, broadcasting: 1
I0817 22:26:20.192233       7 log.go:172] (0x4002f178c0) Go away received
I0817 22:26:20.193696       7 log.go:172] (0x4002f178c0) (0x4002f0cf00) Stream removed, broadcasting: 1
I0817 22:26:20.193893       7 log.go:172] (0x4002f178c0) (0x4002f0d040) Stream removed, broadcasting: 3
I0817 22:26:20.194086       7 log.go:172] (0x4002f178c0) (0x4002f0d0e0) Stream removed, broadcasting: 5
Aug 17 22:26:20.194: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:26:20.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4079" for this suite.

• [SLOW TEST:13.967 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1670,"failed":0}
SSS
------------------------------
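The verification above rests on a kubelet rule: for hostNetwork=false pods the kubelet writes and manages /etc/hosts, except in containers that mount their own file over that path, and for hostNetwork=true pods it leaves the host's file untouched. A sketch of the opt-out container, in the style of busybox-3 above (volume and image names are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// unmanagedHostsContainer mounts a volume at /etc/hosts, which makes the
// kubelet skip managing the file for this container.
func unmanagedHostsContainer() corev1.Container {
	return corev1.Container{
		Name:    "busybox-3",
		Image:   "busybox",
		Command: []string{"sleep", "3600"},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "host-etc-hosts", // hypothetical volume defined in the pod spec
			MountPath: "/etc/hosts",
		}},
	}
}
------------------------------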
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:26:20.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-bf6f0b35-8ff2-43b6-b098-e9c4ecee9c95
STEP: Creating a pod to test consume configMaps
Aug 17 22:26:20.333: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed6aa478-d949-48e0-bf23-e9208dfcd74f" in namespace "configmap-6770" to be "success or failure"
Aug 17 22:26:20.343: INFO: Pod "pod-configmaps-ed6aa478-d949-48e0-bf23-e9208dfcd74f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.163617ms
Aug 17 22:26:22.374: INFO: Pod "pod-configmaps-ed6aa478-d949-48e0-bf23-e9208dfcd74f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040661905s
Aug 17 22:26:24.392: INFO: Pod "pod-configmaps-ed6aa478-d949-48e0-bf23-e9208dfcd74f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058548965s
STEP: Saw pod success
Aug 17 22:26:24.393: INFO: Pod "pod-configmaps-ed6aa478-d949-48e0-bf23-e9208dfcd74f" satisfied condition "success or failure"
Aug 17 22:26:24.397: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ed6aa478-d949-48e0-bf23-e9208dfcd74f container configmap-volume-test: 
STEP: delete the pod
Aug 17 22:26:24.434: INFO: Waiting for pod pod-configmaps-ed6aa478-d949-48e0-bf23-e9208dfcd74f to disappear
Aug 17 22:26:24.439: INFO: Pod pod-configmaps-ed6aa478-d949-48e0-bf23-e9208dfcd74f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:26:24.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6770" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1673,"failed":0}
SSSSSSSS
------------------------------
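"With mappings" means the volume's items remap a ConfigMap key to a chosen file path, and "as non-root" means the pod runs under a non-zero UID, so the mounted file must still be readable to that user. A sketch of such a pod spec (key, paths, and names are illustrative, not the test's generated ones):

package sketch

import corev1 "k8s.io/api/core/v1"

// nonRootConfigMapPodSpec mounts one ConfigMap key under a remapped path
// and runs the pod as UID 1000.
func nonRootConfigMapPodSpec(configMapName string) corev1.PodSpec {
	uid := int64(1000)
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		Volumes: []corev1.Volume{{
			Name: "configmap-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					Items: []corev1.KeyToPath{{
						Key:  "data-1",         // key in the ConfigMap
						Path: "path/to/data-2", // remapped file path inside the mount
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "configmap-volume-test",
			Image:        "busybox",
			Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
			VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
		}},
	}
}
------------------------------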
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:26:24.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-b5fc1cfc-4edf-42f8-8d9e-9ea783fa02ef
STEP: Creating a pod to test consume configMaps
Aug 17 22:26:24.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb253f13-c613-4ad3-8d65-d85506117cb4" in namespace "configmap-9531" to be "success or failure"
Aug 17 22:26:24.575: INFO: Pod "pod-configmaps-fb253f13-c613-4ad3-8d65-d85506117cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.719232ms
Aug 17 22:26:26.693: INFO: Pod "pod-configmaps-fb253f13-c613-4ad3-8d65-d85506117cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138963587s
Aug 17 22:26:28.716: INFO: Pod "pod-configmaps-fb253f13-c613-4ad3-8d65-d85506117cb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161203284s
STEP: Saw pod success
Aug 17 22:26:28.716: INFO: Pod "pod-configmaps-fb253f13-c613-4ad3-8d65-d85506117cb4" satisfied condition "success or failure"
Aug 17 22:26:28.721: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-fb253f13-c613-4ad3-8d65-d85506117cb4 container configmap-volume-test: 
STEP: delete the pod
Aug 17 22:26:28.743: INFO: Waiting for pod pod-configmaps-fb253f13-c613-4ad3-8d65-d85506117cb4 to disappear
Aug 17 22:26:28.747: INFO: Pod pod-configmaps-fb253f13-c613-4ad3-8d65-d85506117cb4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:26:28.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9531" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1681,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
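Both ConfigMap volume tests share the same completion check: poll the pod phase for up to 5 minutes until it reaches Succeeded, failing fast if it reaches Failed. A sketch of that wait using apimachinery's wait package (recent apimachinery assumed):

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForSuccessOrFailure polls the pod phase until it is Succeeded (done),
// Failed (error), or the 5-minute budget used above runs out.
func waitForSuccessOrFailure(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			switch pod.Status.Phase {
			case corev1.PodSucceeded:
				return true, nil
			case corev1.PodFailed:
				return false, fmt.Errorf("pod %s/%s failed", ns, name)
			}
			return false, nil
		})
}
------------------------------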
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:26:28.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-509492ac-ac0d-4f76-9e35-1d4f8abb0ba4
Aug 17 22:26:29.103: INFO: Pod name my-hostname-basic-509492ac-ac0d-4f76-9e35-1d4f8abb0ba4: Found 0 pods out of 1
Aug 17 22:26:34.115: INFO: Pod name my-hostname-basic-509492ac-ac0d-4f76-9e35-1d4f8abb0ba4: Found 1 pods out of 1
Aug 17 22:26:34.115: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-509492ac-ac0d-4f76-9e35-1d4f8abb0ba4" are running
Aug 17 22:26:34.121: INFO: Pod "my-hostname-basic-509492ac-ac0d-4f76-9e35-1d4f8abb0ba4-nlflk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 22:26:29 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 22:26:32 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 22:26:32 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 22:26:29 +0000 UTC Reason: Message:}])
Aug 17 22:26:34.122: INFO: Trying to dial the pod
Aug 17 22:26:39.150: INFO: Controller my-hostname-basic-509492ac-ac0d-4f76-9e35-1d4f8abb0ba4: Got expected result from replica 1 [my-hostname-basic-509492ac-ac0d-4f76-9e35-1d4f8abb0ba4-nlflk]: "my-hostname-basic-509492ac-ac0d-4f76-9e35-1d4f8abb0ba4-nlflk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:26:39.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-550" for this suite.

• [SLOW TEST:10.405 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":111,"skipped":1702,"failed":0}
SSSSSSSSSSSS
------------------------------
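"Dialing" a replica here means fetching from the pod through the API-server proxy and comparing the response to the pod's name, since the image serves its own hostname. One way to issue that request with client-go's pod expansion helper (a sketch; the port is an assumption):

package sketch

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// dialReplica fetches "/" from a replica pod through the API-server proxy;
// a serve-hostname-style image answers with the pod's own name.
func dialReplica(ctx context.Context, cs kubernetes.Interface, ns, podName string) (string, error) {
	body, err := cs.CoreV1().Pods(ns).
		ProxyGet("http", podName, "9376", "/", nil). // 9376: serve-hostname's usual port (assumption)
		DoRaw(ctx)
	return string(body), err
}
------------------------------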
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:26:39.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-3807/configmap-test-79d819b0-e676-470a-bf2a-3cd4ac7a45fe
STEP: Creating a pod to test consume configMaps
Aug 17 22:26:39.317: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9d34e6a-4eff-40d7-841b-8e3d311eec7c" in namespace "configmap-3807" to be "success or failure"
Aug 17 22:26:39.332: INFO: Pod "pod-configmaps-e9d34e6a-4eff-40d7-841b-8e3d311eec7c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.016915ms
Aug 17 22:26:41.423: INFO: Pod "pod-configmaps-e9d34e6a-4eff-40d7-841b-8e3d311eec7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105341313s
Aug 17 22:26:43.430: INFO: Pod "pod-configmaps-e9d34e6a-4eff-40d7-841b-8e3d311eec7c": Phase="Running", Reason="", readiness=true. Elapsed: 4.112439764s
Aug 17 22:26:45.436: INFO: Pod "pod-configmaps-e9d34e6a-4eff-40d7-841b-8e3d311eec7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.118242529s
STEP: Saw pod success
Aug 17 22:26:45.436: INFO: Pod "pod-configmaps-e9d34e6a-4eff-40d7-841b-8e3d311eec7c" satisfied condition "success or failure"
Aug 17 22:26:45.441: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e9d34e6a-4eff-40d7-841b-8e3d311eec7c container env-test: 
STEP: delete the pod
Aug 17 22:26:45.502: INFO: Waiting for pod pod-configmaps-e9d34e6a-4eff-40d7-841b-8e3d311eec7c to disappear
Aug 17 22:26:45.505: INFO: Pod pod-configmaps-e9d34e6a-4eff-40d7-841b-8e3d311eec7c no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:26:45.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3807" for this suite.

• [SLOW TEST:6.351 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1714,"failed":0}
SSSSSSSSSSSS
------------------------------
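Unlike the volume variants above, this test consumes the ConfigMap through the environment: a key is injected via valueFrom.configMapKeyRef and the container prints its environment and exits. A sketch of the container (variable name and key are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// envFromConfigMap injects one ConfigMap key as an environment variable,
// which the container then echoes by dumping its environment.
func envFromConfigMap(configMapName string) corev1.Container {
	return corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "CONFIG_DATA_1", // hypothetical variable name
			ValueFrom: &corev1.EnvVarSource{
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					Key:                  "data-1", // hypothetical key
				},
			},
		}},
	}
}
------------------------------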
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:26:45.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 17 22:26:54.232: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 17 22:26:54.255: INFO: Pod pod-with-poststart-http-hook still exists
Aug 17 22:26:56.256: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 17 22:26:56.333: INFO: Pod pod-with-poststart-http-hook still exists
Aug 17 22:26:58.256: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 17 22:26:58.262: INFO: Pod pod-with-poststart-http-hook still exists
Aug 17 22:27:00.256: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 17 22:27:00.275: INFO: Pod pod-with-poststart-http-hook still exists
Aug 17 22:27:02.256: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 17 22:27:02.265: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:27:02.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1149" for this suite.

• [SLOW TEST:16.761 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1726,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
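The pod-with-poststart-http-hook pod points an HTTPGet lifecycle hook at the pod-handle-http-request pod created in BeforeEach; the kubelet issues the GET right after the container starts, and the test then confirms the handler received it. A sketch of the hooked container (recent client-go, where the handler type is LifecycleHandler; older releases name it Handler; path and port are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// withPostStartHTTPHook returns a container whose postStart hook issues an
// HTTP GET to the handler pod's IP as soon as the container starts.
func withPostStartHTTPHook(handlerIP string) corev1.Container {
	return corev1.Container{
		Name:  "pod-with-poststart-http-hook",
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.LifecycleHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Host: handlerIP,
					Path: "/echo?msg=poststart", // illustrative handler path
					Port: intstr.FromInt(8080),  // illustrative handler port
				},
			},
		},
	}
}
------------------------------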
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:27:02.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 17 22:27:02.350: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 17 22:27:02.404: INFO: Waiting for terminating namespaces to be deleted...
Aug 17 22:27:02.408: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 17 22:27:02.421: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-4079 started at 2020-08-17 22:26:12 +0000 UTC (2 container statuses recorded)
Aug 17 22:27:02.421: INFO: 	Container busybox-1 ready: false, restart count 0
Aug 17 22:27:02.421: INFO: 	Container busybox-2 ready: false, restart count 0
Aug 17 22:27:02.421: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 17 22:27:02.421: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 17 22:27:02.421: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 17 22:27:02.421: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 17 22:27:02.421: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 17 22:27:02.433: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 17 22:27:02.434: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 17 22:27:02.434: INFO: test-pod from e2e-kubelet-etc-hosts-4079 started at 2020-08-17 22:26:06 +0000 UTC (3 container statuses recorded)
Aug 17 22:27:02.434: INFO: 	Container busybox-1 ready: false, restart count 0
Aug 17 22:27:02.434: INFO: 	Container busybox-2 ready: false, restart count 0
Aug 17 22:27:02.434: INFO: 	Container busybox-3 ready: false, restart count 0
Aug 17 22:27:02.434: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 17 22:27:02.434: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 17 22:27:02.434: INFO: pod-handle-http-request from container-lifecycle-hook-1149 started at 2020-08-17 22:26:46 +0000 UTC (1 container status recorded)
Aug 17 22:27:02.434: INFO: 	Container pod-handle-http-request ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to find a node that can run it.
STEP: Explicitly deleting the pod here to free the resources it takes.
STEP: Trying to apply a random label to the found node.
STEP: verifying the node has the label kubernetes.io/e2e-3851c80e-5451-4b50-8d61-8d8ac097dabc=95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (an empty string in the spec) and expecting it to be scheduled
STEP: Trying to create another pod (pod5) with the same hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-3851c80e-5451-4b50-8d61-8d8ac097dabc off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-3851c80e-5451-4b50-8d61-8d8ac097dabc
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:32:15.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8327" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:313.017 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":114,"skipped":1741,"failed":0}
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:32:15.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5278
STEP: Creating an active service to test reachability when its FQDN is referenced as the externalName of another service
STEP: creating service externalsvc in namespace services-5278
STEP: creating replication controller externalsvc in namespace services-5278
I0817 22:32:15.547828       7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5278, replica count: 2
I0817 22:32:18.599140       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 22:32:21.599777       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 17 22:32:21.943: INFO: Creating new exec pod
Aug 17 22:32:27.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5278 execpod25pvj -- /bin/sh -x -c nslookup clusterip-service'
Aug 17 22:32:34.004: INFO: stderr: "I0817 22:32:33.897353    2448 log.go:172] (0x4000adae70) (0x40008bc0a0) Create stream\nI0817 22:32:33.899854    2448 log.go:172] (0x4000adae70) (0x40008bc0a0) Stream added, broadcasting: 1\nI0817 22:32:33.911017    2448 log.go:172] (0x4000adae70) Reply frame received for 1\nI0817 22:32:33.911535    2448 log.go:172] (0x4000adae70) (0x40009e21e0) Create stream\nI0817 22:32:33.911589    2448 log.go:172] (0x4000adae70) (0x40009e21e0) Stream added, broadcasting: 3\nI0817 22:32:33.913142    2448 log.go:172] (0x4000adae70) Reply frame received for 3\nI0817 22:32:33.913378    2448 log.go:172] (0x4000adae70) (0x400092c0a0) Create stream\nI0817 22:32:33.913454    2448 log.go:172] (0x4000adae70) (0x400092c0a0) Stream added, broadcasting: 5\nI0817 22:32:33.914859    2448 log.go:172] (0x4000adae70) Reply frame received for 5\nI0817 22:32:33.966394    2448 log.go:172] (0x4000adae70) Data frame received for 5\nI0817 22:32:33.966664    2448 log.go:172] (0x400092c0a0) (5) Data frame handling\nI0817 22:32:33.967295    2448 log.go:172] (0x400092c0a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0817 22:32:33.974032    2448 log.go:172] (0x4000adae70) Data frame received for 3\nI0817 22:32:33.974172    2448 log.go:172] (0x40009e21e0) (3) Data frame handling\nI0817 22:32:33.974324    2448 log.go:172] (0x40009e21e0) (3) Data frame sent\nI0817 22:32:33.975182    2448 log.go:172] (0x4000adae70) Data frame received for 3\nI0817 22:32:33.975353    2448 log.go:172] (0x40009e21e0) (3) Data frame handling\nI0817 22:32:33.975493    2448 log.go:172] (0x4000adae70) Data frame received for 5\nI0817 22:32:33.975617    2448 log.go:172] (0x400092c0a0) (5) Data frame handling\nI0817 22:32:33.975725    2448 log.go:172] (0x40009e21e0) (3) Data frame sent\nI0817 22:32:33.975854    2448 log.go:172] (0x4000adae70) Data frame received for 3\nI0817 22:32:33.975956    2448 log.go:172] (0x40009e21e0) (3) Data frame handling\nI0817 22:32:33.977330    2448 log.go:172] (0x4000adae70) Data frame received for 1\nI0817 22:32:33.977432    2448 log.go:172] (0x40008bc0a0) (1) Data frame handling\nI0817 22:32:33.977542    2448 log.go:172] (0x40008bc0a0) (1) Data frame sent\nI0817 22:32:33.978826    2448 log.go:172] (0x4000adae70) (0x40008bc0a0) Stream removed, broadcasting: 1\nI0817 22:32:33.991257    2448 log.go:172] (0x4000adae70) Go away received\nI0817 22:32:33.993318    2448 log.go:172] (0x4000adae70) (0x40008bc0a0) Stream removed, broadcasting: 1\nI0817 22:32:33.993846    2448 log.go:172] (0x4000adae70) (0x40009e21e0) Stream removed, broadcasting: 3\nI0817 22:32:33.994017    2448 log.go:172] (0x4000adae70) (0x400092c0a0) Stream removed, broadcasting: 5\n"
Aug 17 22:32:34.004: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5278.svc.cluster.local\tcanonical name = externalsvc.services-5278.svc.cluster.local.\nName:\texternalsvc.services-5278.svc.cluster.local\nAddress: 10.111.219.208\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5278, will wait for the garbage collector to delete the pods
Aug 17 22:32:34.079: INFO: Deleting ReplicationController externalsvc took: 7.830923ms
Aug 17 22:32:34.480: INFO: Terminating ReplicationController externalsvc pods took: 400.932719ms
Aug 17 22:32:42.794: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:32:43.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5278" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:29.039 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":115,"skipped":1741,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:32:44.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 17 22:32:45.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7784'
Aug 17 22:32:46.893: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 17 22:32:46.893: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Aug 17 22:32:47.262: INFO: Waiting up to 5m0s for 1 pod to be running and ready: [e2e-test-httpd-rc-2dmr4]
Aug 17 22:32:47.262: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-2dmr4" in namespace "kubectl-7784" to be "running and ready"
Aug 17 22:32:47.335: INFO: Pod "e2e-test-httpd-rc-2dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 73.286806ms
Aug 17 22:32:49.595: INFO: Pod "e2e-test-httpd-rc-2dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332931485s
Aug 17 22:32:51.924: INFO: Pod "e2e-test-httpd-rc-2dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.661862802s
Aug 17 22:32:53.931: INFO: Pod "e2e-test-httpd-rc-2dmr4": Phase="Running", Reason="", readiness=true. Elapsed: 6.669368153s
Aug 17 22:32:53.932: INFO: Pod "e2e-test-httpd-rc-2dmr4" satisfied condition "running and ready"
Aug 17 22:32:53.932: INFO: Wanted 1 pod to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-2dmr4]
Aug 17 22:32:53.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7784'
Aug 17 22:32:55.314: INFO: stderr: ""
Aug 17 22:32:55.314: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.103. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.103. Set the 'ServerName' directive globally to suppress this message\n[Mon Aug 17 22:32:52.924347 2020] [mpm_event:notice] [pid 1:tid 139826292112232] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Aug 17 22:32:52.924395 2020] [core:notice] [pid 1:tid 139826292112232] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Aug 17 22:32:55.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7784'
Aug 17 22:32:56.543: INFO: stderr: ""
Aug 17 22:32:56.543: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:32:56.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7784" for this suite.

• [SLOW TEST:12.216 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
    should create an rc from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":116,"skipped":1759,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:32:56.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:32:56.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 17 22:33:15.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7236 create -f -'
Aug 17 22:33:21.456: INFO: stderr: ""
Aug 17 22:33:21.456: INFO: stdout: "e2e-test-crd-publish-openapi-122-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 17 22:33:21.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7236 delete e2e-test-crd-publish-openapi-122-crds test-cr'
Aug 17 22:33:22.705: INFO: stderr: ""
Aug 17 22:33:22.705: INFO: stdout: "e2e-test-crd-publish-openapi-122-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 17 22:33:22.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7236 apply -f -'
Aug 17 22:33:24.347: INFO: stderr: ""
Aug 17 22:33:24.347: INFO: stdout: "e2e-test-crd-publish-openapi-122-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 17 22:33:24.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7236 delete e2e-test-crd-publish-openapi-122-crds test-cr'
Aug 17 22:33:25.609: INFO: stderr: ""
Aug 17 22:33:25.609: INFO: stdout: "e2e-test-crd-publish-openapi-122-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 17 22:33:25.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-122-crds'
Aug 17 22:33:27.166: INFO: stderr: ""
Aug 17 22:33:27.167: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-122-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:33:45.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7236" for this suite.

• [SLOW TEST:49.659 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":117,"skipped":1761,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:33:46.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-caafd57b-bbbd-4199-83dc-a4c082f983ec
STEP: Creating a pod to test consume configMaps
Aug 17 22:33:46.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-6144fe0c-a8b7-4361-a263-448af862a845" in namespace "configmap-5115" to be "success or failure"
Aug 17 22:33:46.365: INFO: Pod "pod-configmaps-6144fe0c-a8b7-4361-a263-448af862a845": Phase="Pending", Reason="", readiness=false. Elapsed: 17.61963ms
Aug 17 22:33:48.371: INFO: Pod "pod-configmaps-6144fe0c-a8b7-4361-a263-448af862a845": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023223916s
Aug 17 22:33:50.378: INFO: Pod "pod-configmaps-6144fe0c-a8b7-4361-a263-448af862a845": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03004552s
STEP: Saw pod success
Aug 17 22:33:50.378: INFO: Pod "pod-configmaps-6144fe0c-a8b7-4361-a263-448af862a845" satisfied condition "success or failure"
Aug 17 22:33:50.382: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6144fe0c-a8b7-4361-a263-448af862a845 container configmap-volume-test: 
STEP: delete the pod
Aug 17 22:33:50.415: INFO: Waiting for pod pod-configmaps-6144fe0c-a8b7-4361-a263-448af862a845 to disappear
Aug 17 22:33:50.424: INFO: Pod pod-configmaps-6144fe0c-a8b7-4361-a263-448af862a845 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:33:50.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5115" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1763,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:33:50.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 22:33:50.553: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20766e0f-cbed-46d5-a14a-7fe2acf146a9" in namespace "projected-9399" to be "success or failure"
Aug 17 22:33:50.583: INFO: Pod "downwardapi-volume-20766e0f-cbed-46d5-a14a-7fe2acf146a9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.730977ms
Aug 17 22:33:52.589: INFO: Pod "downwardapi-volume-20766e0f-cbed-46d5-a14a-7fe2acf146a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035520779s
Aug 17 22:33:54.597: INFO: Pod "downwardapi-volume-20766e0f-cbed-46d5-a14a-7fe2acf146a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043305478s
STEP: Saw pod success
Aug 17 22:33:54.597: INFO: Pod "downwardapi-volume-20766e0f-cbed-46d5-a14a-7fe2acf146a9" satisfied condition "success or failure"
Aug 17 22:33:54.602: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-20766e0f-cbed-46d5-a14a-7fe2acf146a9 container client-container: 
STEP: delete the pod
Aug 17 22:33:54.635: INFO: Waiting for pod downwardapi-volume-20766e0f-cbed-46d5-a14a-7fe2acf146a9 to disappear
Aug 17 22:33:54.672: INFO: Pod downwardapi-volume-20766e0f-cbed-46d5-a14a-7fe2acf146a9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:33:54.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9399" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1781,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:33:54.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:33:54.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3502" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":120,"skipped":1797,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:33:54.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Aug 17 22:33:55.204: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix743847417/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:33:56.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5579" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":121,"skipped":1816,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:33:56.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 17 22:33:57.010: INFO: Waiting up to 5m0s for pod "pod-be1da2b8-5403-4f39-83bf-6bf53cfe02b3" in namespace "emptydir-7909" to be "success or failure"
Aug 17 22:33:57.023: INFO: Pod "pod-be1da2b8-5403-4f39-83bf-6bf53cfe02b3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.943204ms
Aug 17 22:33:59.032: INFO: Pod "pod-be1da2b8-5403-4f39-83bf-6bf53cfe02b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021315589s
Aug 17 22:34:01.039: INFO: Pod "pod-be1da2b8-5403-4f39-83bf-6bf53cfe02b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0279528s
STEP: Saw pod success
Aug 17 22:34:01.039: INFO: Pod "pod-be1da2b8-5403-4f39-83bf-6bf53cfe02b3" satisfied condition "success or failure"
Aug 17 22:34:01.043: INFO: Trying to get logs from node jerma-worker2 pod pod-be1da2b8-5403-4f39-83bf-6bf53cfe02b3 container test-container: 
STEP: delete the pod
Aug 17 22:34:01.098: INFO: Waiting for pod pod-be1da2b8-5403-4f39-83bf-6bf53cfe02b3 to disappear
Aug 17 22:34:01.102: INFO: Pod pod-be1da2b8-5403-4f39-83bf-6bf53cfe02b3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:34:01.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7909" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1819,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:34:01.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:34:01.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:34:07.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4674" for this suite.

• [SLOW TEST:6.516 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1909,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:34:07.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:34:21.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2864" for this suite.

• [SLOW TEST:13.967 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":124,"skipped":1912,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:34:21.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:34:41.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9392" for this suite.

• [SLOW TEST:20.014 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":125,"skipped":1957,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:34:41.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:34:42.333: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-99e6cd77-79a1-420c-a617-7f377ef2ece3" in namespace "security-context-test-714" to be "success or failure"
Aug 17 22:34:42.602: INFO: Pod "busybox-readonly-false-99e6cd77-79a1-420c-a617-7f377ef2ece3": Phase="Pending", Reason="", readiness=false. Elapsed: 268.771692ms
Aug 17 22:34:44.609: INFO: Pod "busybox-readonly-false-99e6cd77-79a1-420c-a617-7f377ef2ece3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275513678s
Aug 17 22:34:46.747: INFO: Pod "busybox-readonly-false-99e6cd77-79a1-420c-a617-7f377ef2ece3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.413295051s
Aug 17 22:34:48.881: INFO: Pod "busybox-readonly-false-99e6cd77-79a1-420c-a617-7f377ef2ece3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.547469922s
Aug 17 22:34:50.895: INFO: Pod "busybox-readonly-false-99e6cd77-79a1-420c-a617-7f377ef2ece3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.561741331s
Aug 17 22:34:50.896: INFO: Pod "busybox-readonly-false-99e6cd77-79a1-420c-a617-7f377ef2ece3" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:34:50.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-714" for this suite.

• [SLOW TEST:9.561 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":1982,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:34:51.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:34:52.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 17 22:35:11.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1278 create -f -'
Aug 17 22:35:16.415: INFO: stderr: ""
Aug 17 22:35:16.415: INFO: stdout: "e2e-test-crd-publish-openapi-3428-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 17 22:35:16.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1278 delete e2e-test-crd-publish-openapi-3428-crds test-cr'
Aug 17 22:35:17.676: INFO: stderr: ""
Aug 17 22:35:17.676: INFO: stdout: "e2e-test-crd-publish-openapi-3428-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 17 22:35:17.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1278 apply -f -'
Aug 17 22:35:19.248: INFO: stderr: ""
Aug 17 22:35:19.248: INFO: stdout: "e2e-test-crd-publish-openapi-3428-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 17 22:35:19.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1278 delete e2e-test-crd-publish-openapi-3428-crds test-cr'
Aug 17 22:35:20.516: INFO: stderr: ""
Aug 17 22:35:20.516: INFO: stdout: "e2e-test-crd-publish-openapi-3428-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 17 22:35:20.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3428-crds'
Aug 17 22:35:23.296: INFO: stderr: ""
Aug 17 22:35:23.296: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3428-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:35:42.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1278" for this suite.

• [SLOW TEST:51.027 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":127,"skipped":1996,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:35:42.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:36:01.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2161" for this suite.

• [SLOW TEST:19.042 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":128,"skipped":1998,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:36:01.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:36:05.598: INFO: Waiting up to 5m0s for pod "client-envvars-7fbad8ac-bb73-4ec7-989e-dd39b42c2d7a" in namespace "pods-1727" to be "success or failure"
Aug 17 22:36:05.644: INFO: Pod "client-envvars-7fbad8ac-bb73-4ec7-989e-dd39b42c2d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 45.479905ms
Aug 17 22:36:07.956: INFO: Pod "client-envvars-7fbad8ac-bb73-4ec7-989e-dd39b42c2d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357726828s
Aug 17 22:36:09.963: INFO: Pod "client-envvars-7fbad8ac-bb73-4ec7-989e-dd39b42c2d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364939393s
Aug 17 22:36:11.971: INFO: Pod "client-envvars-7fbad8ac-bb73-4ec7-989e-dd39b42c2d7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.372388823s
STEP: Saw pod success
Aug 17 22:36:11.971: INFO: Pod "client-envvars-7fbad8ac-bb73-4ec7-989e-dd39b42c2d7a" satisfied condition "success or failure"
Aug 17 22:36:11.976: INFO: Trying to get logs from node jerma-worker pod client-envvars-7fbad8ac-bb73-4ec7-989e-dd39b42c2d7a container env3cont: 
STEP: delete the pod
Aug 17 22:36:12.419: INFO: Waiting for pod client-envvars-7fbad8ac-bb73-4ec7-989e-dd39b42c2d7a to disappear
Aug 17 22:36:12.483: INFO: Pod client-envvars-7fbad8ac-bb73-4ec7-989e-dd39b42c2d7a no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:36:12.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1727" for this suite.

• [SLOW TEST:11.367 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2008,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:36:12.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 17 22:36:19.285: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:36:19.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2538" for this suite.

• [SLOW TEST:6.686 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2032,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:36:19.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Aug 17 22:36:23.738: INFO: Pod pod-hostip-e40ba474-dfce-499c-9b94-c1c571da29b5 has hostIP: 172.18.0.6
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:36:23.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-670" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2033,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:36:23.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 17 22:36:23.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1154'
Aug 17 22:36:25.538: INFO: stderr: ""
Aug 17 22:36:25.538: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 17 22:36:26.546: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 17 22:36:26.547: INFO: Found 0 / 1
Aug 17 22:36:27.633: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 17 22:36:27.633: INFO: Found 0 / 1
Aug 17 22:36:28.546: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 17 22:36:28.546: INFO: Found 0 / 1
Aug 17 22:36:29.546: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 17 22:36:29.547: INFO: Found 1 / 1
Aug 17 22:36:29.547: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 17 22:36:29.552: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 17 22:36:29.552: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Aug 17 22:36:29.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-h9hhl --namespace=kubectl-1154 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 17 22:36:30.796: INFO: stderr: ""
Aug 17 22:36:30.796: INFO: stdout: "pod/agnhost-master-h9hhl patched\n"
STEP: checking annotations
Aug 17 22:36:30.803: INFO: Selector matched 1 pod for map[app:agnhost]
Aug 17 22:36:30.803: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:36:30.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1154" for this suite.

• [SLOW TEST:7.065 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":132,"skipped":2051,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:36:30.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 17 22:36:30.905: INFO: Waiting up to 5m0s for pod "pod-84affa5d-18ce-49eb-9a38-0f80e20b70d0" in namespace "emptydir-8422" to be "success or failure"
Aug 17 22:36:30.923: INFO: Pod "pod-84affa5d-18ce-49eb-9a38-0f80e20b70d0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.506687ms
Aug 17 22:36:32.930: INFO: Pod "pod-84affa5d-18ce-49eb-9a38-0f80e20b70d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024089775s
Aug 17 22:36:34.935: INFO: Pod "pod-84affa5d-18ce-49eb-9a38-0f80e20b70d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029427624s
STEP: Saw pod success
Aug 17 22:36:34.935: INFO: Pod "pod-84affa5d-18ce-49eb-9a38-0f80e20b70d0" satisfied condition "success or failure"
Aug 17 22:36:34.939: INFO: Trying to get logs from node jerma-worker pod pod-84affa5d-18ce-49eb-9a38-0f80e20b70d0 container test-container: 
STEP: delete the pod
Aug 17 22:36:34.974: INFO: Waiting for pod pod-84affa5d-18ce-49eb-9a38-0f80e20b70d0 to disappear
Aug 17 22:36:35.007: INFO: Pod pod-84affa5d-18ce-49eb-9a38-0f80e20b70d0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:36:35.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8422" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2085,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:36:35.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-3ecd8ab0-971a-4f6c-a27e-916606234104
STEP: Creating a pod to test consume secrets
Aug 17 22:36:35.118: INFO: Waiting up to 5m0s for pod "pod-secrets-14ce7e29-173a-4dbb-9342-ab6aec017efb" in namespace "secrets-3226" to be "success or failure"
Aug 17 22:36:35.140: INFO: Pod "pod-secrets-14ce7e29-173a-4dbb-9342-ab6aec017efb": Phase="Pending", Reason="", readiness=false. Elapsed: 21.596683ms
Aug 17 22:36:37.147: INFO: Pod "pod-secrets-14ce7e29-173a-4dbb-9342-ab6aec017efb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028856014s
Aug 17 22:36:39.352: INFO: Pod "pod-secrets-14ce7e29-173a-4dbb-9342-ab6aec017efb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233694116s
Aug 17 22:36:41.478: INFO: Pod "pod-secrets-14ce7e29-173a-4dbb-9342-ab6aec017efb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.359978493s
STEP: Saw pod success
Aug 17 22:36:41.478: INFO: Pod "pod-secrets-14ce7e29-173a-4dbb-9342-ab6aec017efb" satisfied condition "success or failure"
Aug 17 22:36:41.500: INFO: Trying to get logs from node jerma-worker pod pod-secrets-14ce7e29-173a-4dbb-9342-ab6aec017efb container secret-env-test: 
STEP: delete the pod
Aug 17 22:36:41.935: INFO: Waiting for pod pod-secrets-14ce7e29-173a-4dbb-9342-ab6aec017efb to disappear
Aug 17 22:36:41.948: INFO: Pod pod-secrets-14ce7e29-173a-4dbb-9342-ab6aec017efb no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:36:41.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3226" for this suite.

• [SLOW TEST:7.057 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2124,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:36:42.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:36:42.240: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f3fbff60-f3b4-4d0c-b578-f1898bd778c1" in namespace "security-context-test-3531" to be "success or failure"
Aug 17 22:36:42.272: INFO: Pod "alpine-nnp-false-f3fbff60-f3b4-4d0c-b578-f1898bd778c1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.433521ms
Aug 17 22:36:44.465: INFO: Pod "alpine-nnp-false-f3fbff60-f3b4-4d0c-b578-f1898bd778c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225021721s
Aug 17 22:36:46.478: INFO: Pod "alpine-nnp-false-f3fbff60-f3b4-4d0c-b578-f1898bd778c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23792987s
Aug 17 22:36:48.573: INFO: Pod "alpine-nnp-false-f3fbff60-f3b4-4d0c-b578-f1898bd778c1": Phase="Running", Reason="", readiness=true. Elapsed: 6.333286621s
Aug 17 22:36:50.578: INFO: Pod "alpine-nnp-false-f3fbff60-f3b4-4d0c-b578-f1898bd778c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.338305076s
Aug 17 22:36:50.578: INFO: Pod "alpine-nnp-false-f3fbff60-f3b4-4d0c-b578-f1898bd778c1" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:36:50.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3531" for this suite.

• [SLOW TEST:8.761 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:36:50.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 17 22:36:55.740: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:36:55.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6078" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2195,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:36:55.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 17 22:36:55.930: INFO: Waiting up to 5m0s for pod "downward-api-2c7c7ff4-0762-4806-a8c7-e43bf7f24429" in namespace "downward-api-8212" to be "success or failure"
Aug 17 22:36:55.963: INFO: Pod "downward-api-2c7c7ff4-0762-4806-a8c7-e43bf7f24429": Phase="Pending", Reason="", readiness=false. Elapsed: 32.775192ms
Aug 17 22:36:57.970: INFO: Pod "downward-api-2c7c7ff4-0762-4806-a8c7-e43bf7f24429": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039390194s
Aug 17 22:36:59.980: INFO: Pod "downward-api-2c7c7ff4-0762-4806-a8c7-e43bf7f24429": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050044539s
STEP: Saw pod success
Aug 17 22:36:59.980: INFO: Pod "downward-api-2c7c7ff4-0762-4806-a8c7-e43bf7f24429" satisfied condition "success or failure"
Aug 17 22:36:59.985: INFO: Trying to get logs from node jerma-worker2 pod downward-api-2c7c7ff4-0762-4806-a8c7-e43bf7f24429 container dapi-container: 
STEP: delete the pod
Aug 17 22:37:00.202: INFO: Waiting for pod downward-api-2c7c7ff4-0762-4806-a8c7-e43bf7f24429 to disappear
Aug 17 22:37:00.381: INFO: Pod downward-api-2c7c7ff4-0762-4806-a8c7-e43bf7f24429 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:37:00.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8212" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2197,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:37:00.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 17 22:37:08.854: INFO: 8 pods remaining
Aug 17 22:37:08.855: INFO: 0 pods have nil DeletionTimestamp
Aug 17 22:37:08.855: INFO: 
Aug 17 22:37:10.747: INFO: 0 pods remaining
Aug 17 22:37:10.747: INFO: 0 pods have nil DeletionTimestamp
Aug 17 22:37:10.747: INFO: 
STEP: Gathering metrics
W0817 22:37:11.879663       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 17 22:37:11.880: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:37:11.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5527" for this suite.

• [SLOW TEST:12.078 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":138,"skipped":2200,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:37:12.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7851
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule the stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7851
STEP: Creating statefulset with conflicting port in namespace statefulset-7851
STEP: Waiting until pod test-pod starts running in namespace statefulset-7851
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-7851
Aug 17 22:37:17.662: INFO: Observed stateful pod in namespace: statefulset-7851, name: ss-0, uid: 1a5bb3c4-0320-4076-bcb3-c38aca25c03e, status phase: Failed. Waiting for statefulset controller to delete.
Aug 17 22:37:17.663: INFO: Observed stateful pod in namespace: statefulset-7851, name: ss-0, uid: 1a5bb3c4-0320-4076-bcb3-c38aca25c03e, status phase: Failed. Waiting for statefulset controller to delete.
Aug 17 22:37:17.677: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7851
STEP: Removing pod with conflicting port in namespace statefulset-7851
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7851 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 17 22:37:23.930: INFO: Deleting all statefulset in ns statefulset-7851
Aug 17 22:37:23.935: INFO: Scaling statefulset ss to 0
Aug 17 22:37:34.032: INFO: Waiting for statefulset status.replicas updated to 0
Aug 17 22:37:34.046: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:37:34.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7851" for this suite.

• [SLOW TEST:21.614 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":139,"skipped":2241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:37:34.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 17 22:37:34.298: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-a a8f88d0f-ef74-4dbc-ba8b-2b295b2d4ace 888973 0 2020-08-17 22:37:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 17 22:37:34.300: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-a a8f88d0f-ef74-4dbc-ba8b-2b295b2d4ace 888973 0 2020-08-17 22:37:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 17 22:37:44.315: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-a a8f88d0f-ef74-4dbc-ba8b-2b295b2d4ace 889028 0 2020-08-17 22:37:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 17 22:37:44.315: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-a a8f88d0f-ef74-4dbc-ba8b-2b295b2d4ace 889028 0 2020-08-17 22:37:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 17 22:37:54.327: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-a a8f88d0f-ef74-4dbc-ba8b-2b295b2d4ace 889058 0 2020-08-17 22:37:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 17 22:37:54.333: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-a a8f88d0f-ef74-4dbc-ba8b-2b295b2d4ace 889058 0 2020-08-17 22:37:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 17 22:38:04.341: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-a a8f88d0f-ef74-4dbc-ba8b-2b295b2d4ace 889086 0 2020-08-17 22:37:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 17 22:38:04.342: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-a a8f88d0f-ef74-4dbc-ba8b-2b295b2d4ace 889086 0 2020-08-17 22:37:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 17 22:38:14.351: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-b 693f7846-e794-4c17-b803-410f9c671c08 889116 0 2020-08-17 22:38:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 17 22:38:14.352: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-b 693f7846-e794-4c17-b803-410f9c671c08 889116 0 2020-08-17 22:38:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 17 22:38:24.359: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-b 693f7846-e794-4c17-b803-410f9c671c08 889146 0 2020-08-17 22:38:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 17 22:38:24.360: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8825 /api/v1/namespaces/watch-8825/configmaps/e2e-watch-test-configmap-b 693f7846-e794-4c17-b803-410f9c671c08 889146 0 2020-08-17 22:38:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:38:34.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8825" for this suite.

• [SLOW TEST:60.254 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":140,"skipped":2280,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:38:34.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 22:38:34.451: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89b682d1-03c8-4e42-863c-105db0064c10" in namespace "projected-6597" to be "success or failure"
Aug 17 22:38:34.491: INFO: Pod "downwardapi-volume-89b682d1-03c8-4e42-863c-105db0064c10": Phase="Pending", Reason="", readiness=false. Elapsed: 40.289794ms
Aug 17 22:38:36.515: INFO: Pod "downwardapi-volume-89b682d1-03c8-4e42-863c-105db0064c10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063791425s
Aug 17 22:38:38.522: INFO: Pod "downwardapi-volume-89b682d1-03c8-4e42-863c-105db0064c10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071088403s
STEP: Saw pod success
Aug 17 22:38:38.522: INFO: Pod "downwardapi-volume-89b682d1-03c8-4e42-863c-105db0064c10" satisfied condition "success or failure"
Aug 17 22:38:38.528: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-89b682d1-03c8-4e42-863c-105db0064c10 container client-container: 
STEP: delete the pod
Aug 17 22:38:38.572: INFO: Waiting for pod downwardapi-volume-89b682d1-03c8-4e42-863c-105db0064c10 to disappear
Aug 17 22:38:38.604: INFO: Pod downwardapi-volume-89b682d1-03c8-4e42-863c-105db0064c10 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:38:38.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6597" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2281,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:38:38.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-900/secret-test-c99761e6-ff30-41ed-8871-c245107473be
STEP: Creating a pod to test consume secrets
Aug 17 22:38:38.733: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1bfa0d7-ce52-456c-8524-3664c744c86a" in namespace "secrets-900" to be "success or failure"
Aug 17 22:38:38.741: INFO: Pod "pod-configmaps-a1bfa0d7-ce52-456c-8524-3664c744c86a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.497821ms
Aug 17 22:38:40.747: INFO: Pod "pod-configmaps-a1bfa0d7-ce52-456c-8524-3664c744c86a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014095962s
Aug 17 22:38:42.754: INFO: Pod "pod-configmaps-a1bfa0d7-ce52-456c-8524-3664c744c86a": Phase="Running", Reason="", readiness=true. Elapsed: 4.02149707s
Aug 17 22:38:44.762: INFO: Pod "pod-configmaps-a1bfa0d7-ce52-456c-8524-3664c744c86a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029176925s
STEP: Saw pod success
Aug 17 22:38:44.762: INFO: Pod "pod-configmaps-a1bfa0d7-ce52-456c-8524-3664c744c86a" satisfied condition "success or failure"
Aug 17 22:38:44.767: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a1bfa0d7-ce52-456c-8524-3664c744c86a container env-test: 
STEP: delete the pod
Aug 17 22:38:44.821: INFO: Waiting for pod pod-configmaps-a1bfa0d7-ce52-456c-8524-3664c744c86a to disappear
Aug 17 22:38:44.827: INFO: Pod pod-configmaps-a1bfa0d7-ce52-456c-8524-3664c744c86a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:38:44.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-900" for this suite.

• [SLOW TEST:6.215 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2290,"failed":0}
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:38:44.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 17 22:38:44.938: INFO: Waiting up to 5m0s for pod "downward-api-f814d2d8-a942-4433-837e-7270bcbae535" in namespace "downward-api-2109" to be "success or failure"
Aug 17 22:38:44.941: INFO: Pod "downward-api-f814d2d8-a942-4433-837e-7270bcbae535": Phase="Pending", Reason="", readiness=false. Elapsed: 3.731187ms
Aug 17 22:38:46.947: INFO: Pod "downward-api-f814d2d8-a942-4433-837e-7270bcbae535": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009622317s
Aug 17 22:38:48.954: INFO: Pod "downward-api-f814d2d8-a942-4433-837e-7270bcbae535": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016291967s
STEP: Saw pod success
Aug 17 22:38:48.954: INFO: Pod "downward-api-f814d2d8-a942-4433-837e-7270bcbae535" satisfied condition "success or failure"
Aug 17 22:38:48.958: INFO: Trying to get logs from node jerma-worker pod downward-api-f814d2d8-a942-4433-837e-7270bcbae535 container dapi-container: 
STEP: delete the pod
Aug 17 22:38:49.013: INFO: Waiting for pod downward-api-f814d2d8-a942-4433-837e-7270bcbae535 to disappear
Aug 17 22:38:49.026: INFO: Pod downward-api-f814d2d8-a942-4433-837e-7270bcbae535 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:38:49.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2109" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2293,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:38:49.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-a764eebe-69ea-4a77-9b1b-e2d544ad66f1
STEP: Creating a pod to test consume secrets
Aug 17 22:38:49.188: INFO: Waiting up to 5m0s for pod "pod-secrets-ae669fa1-76a6-4597-9667-8d447209427f" in namespace "secrets-2389" to be "success or failure"
Aug 17 22:38:49.193: INFO: Pod "pod-secrets-ae669fa1-76a6-4597-9667-8d447209427f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.630957ms
Aug 17 22:38:51.335: INFO: Pod "pod-secrets-ae669fa1-76a6-4597-9667-8d447209427f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146234105s
Aug 17 22:38:53.342: INFO: Pod "pod-secrets-ae669fa1-76a6-4597-9667-8d447209427f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153291386s
STEP: Saw pod success
Aug 17 22:38:53.342: INFO: Pod "pod-secrets-ae669fa1-76a6-4597-9667-8d447209427f" satisfied condition "success or failure"
Aug 17 22:38:53.347: INFO: Trying to get logs from node jerma-worker pod pod-secrets-ae669fa1-76a6-4597-9667-8d447209427f container secret-volume-test: 
STEP: delete the pod
Aug 17 22:38:54.124: INFO: Waiting for pod pod-secrets-ae669fa1-76a6-4597-9667-8d447209427f to disappear
Aug 17 22:38:54.159: INFO: Pod pod-secrets-ae669fa1-76a6-4597-9667-8d447209427f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:38:54.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2389" for this suite.

• [SLOW TEST:5.262 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2307,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:38:54.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-02c3e806-1dfd-45cd-8200-f5c80de8ab2d in namespace container-probe-3982
Aug 17 22:38:58.581: INFO: Started pod busybox-02c3e806-1dfd-45cd-8200-f5c80de8ab2d in namespace container-probe-3982
STEP: checking the pod's current state and verifying that restartCount is present
Aug 17 22:38:58.586: INFO: Initial restart count of pod busybox-02c3e806-1dfd-45cd-8200-f5c80de8ab2d is 0
Aug 17 22:39:54.890: INFO: Restart count of pod container-probe-3982/busybox-02c3e806-1dfd-45cd-8200-f5c80de8ab2d is now 1 (56.303657899s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:39:54.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3982" for this suite.

• [SLOW TEST:60.635 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2345,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:39:54.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 17 22:39:55.028: INFO: Waiting up to 5m0s for pod "pod-c52c7102-feb9-401e-b527-989e901ecf9c" in namespace "emptydir-3383" to be "success or failure"
Aug 17 22:39:55.065: INFO: Pod "pod-c52c7102-feb9-401e-b527-989e901ecf9c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.187645ms
Aug 17 22:39:57.072: INFO: Pod "pod-c52c7102-feb9-401e-b527-989e901ecf9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042925038s
Aug 17 22:39:59.078: INFO: Pod "pod-c52c7102-feb9-401e-b527-989e901ecf9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049562198s
Aug 17 22:40:01.947: INFO: Pod "pod-c52c7102-feb9-401e-b527-989e901ecf9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.918364083s
STEP: Saw pod success
Aug 17 22:40:01.947: INFO: Pod "pod-c52c7102-feb9-401e-b527-989e901ecf9c" satisfied condition "success or failure"
Aug 17 22:40:01.985: INFO: Trying to get logs from node jerma-worker pod pod-c52c7102-feb9-401e-b527-989e901ecf9c container test-container: 
STEP: delete the pod
Aug 17 22:40:02.361: INFO: Waiting for pod pod-c52c7102-feb9-401e-b527-989e901ecf9c to disappear
Aug 17 22:40:02.365: INFO: Pod pod-c52c7102-feb9-401e-b527-989e901ecf9c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:40:02.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3383" for this suite.

• [SLOW TEST:7.574 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2348,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:40:02.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5547
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 17 22:40:03.040: INFO: Found 0 stateful pods, waiting for 3
Aug 17 22:40:13.049: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 22:40:13.050: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 22:40:13.050: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 17 22:40:23.050: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 22:40:23.050: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 22:40:23.050: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 17 22:40:23.086: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 17 22:40:33.211: INFO: Updating stateful set ss2
Aug 17 22:40:33.241: INFO: Waiting for Pod statefulset-5547/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 17 22:40:44.838: INFO: Found 2 stateful pods, waiting for 3
Aug 17 22:40:54.848: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 22:40:54.849: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 22:40:54.849: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 17 22:40:54.878: INFO: Updating stateful set ss2
Aug 17 22:40:55.324: INFO: Waiting for Pod statefulset-5547/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 17 22:41:05.430: INFO: Updating stateful set ss2
Aug 17 22:41:05.499: INFO: Waiting for StatefulSet statefulset-5547/ss2 to complete update
Aug 17 22:41:05.499: INFO: Waiting for Pod statefulset-5547/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 17 22:41:15.513: INFO: Waiting for StatefulSet statefulset-5547/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 17 22:41:25.513: INFO: Deleting all statefulset in ns statefulset-5547
Aug 17 22:41:25.518: INFO: Scaling statefulset ss2 to 0
Aug 17 22:41:45.543: INFO: Waiting for statefulset status.replicas updated to 0
Aug 17 22:41:45.547: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:41:45.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5547" for this suite.

• [SLOW TEST:103.080 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":147,"skipped":2383,"failed":0}
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:41:45.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 17 22:41:49.898: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 17 22:41:56.134: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:41:56.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1234" for this suite.

• [SLOW TEST:10.552 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":148,"skipped":2387,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:41:56.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:42:00.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8415" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2391,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:42:00.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 17 22:42:00.377: INFO: Waiting up to 5m0s for pod "pod-4fabf3ce-1bcf-461c-b6fe-5665e391da54" in namespace "emptydir-2674" to be "success or failure"
Aug 17 22:42:00.387: INFO: Pod "pod-4fabf3ce-1bcf-461c-b6fe-5665e391da54": Phase="Pending", Reason="", readiness=false. Elapsed: 9.865242ms
Aug 17 22:42:02.393: INFO: Pod "pod-4fabf3ce-1bcf-461c-b6fe-5665e391da54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015628304s
Aug 17 22:42:04.399: INFO: Pod "pod-4fabf3ce-1bcf-461c-b6fe-5665e391da54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021549174s
STEP: Saw pod success
Aug 17 22:42:04.399: INFO: Pod "pod-4fabf3ce-1bcf-461c-b6fe-5665e391da54" satisfied condition "success or failure"
Aug 17 22:42:04.404: INFO: Trying to get logs from node jerma-worker2 pod pod-4fabf3ce-1bcf-461c-b6fe-5665e391da54 container test-container: 
STEP: delete the pod
Aug 17 22:42:04.545: INFO: Waiting for pod pod-4fabf3ce-1bcf-461c-b6fe-5665e391da54 to disappear
Aug 17 22:42:04.612: INFO: Pod pod-4fabf3ce-1bcf-461c-b6fe-5665e391da54 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:42:04.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2674" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2396,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:42:04.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 17 22:42:04.892: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3904 /api/v1/namespaces/watch-3904/configmaps/e2e-watch-test-resource-version 954cc867-8f16-45a3-8f6d-11232fb37984 890271 0 2020-08-17 22:42:04 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 17 22:42:04.893: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3904 /api/v1/namespaces/watch-3904/configmaps/e2e-watch-test-resource-version 954cc867-8f16-45a3-8f6d-11232fb37984 890272 0 2020-08-17 22:42:04 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:42:04.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3904" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":151,"skipped":2421,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:42:04.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-b1a45c82-bc59-47ad-aee2-47bc65791915 in namespace container-probe-7283
Aug 17 22:42:09.618: INFO: Started pod liveness-b1a45c82-bc59-47ad-aee2-47bc65791915 in namespace container-probe-7283
STEP: checking the pod's current state and verifying that restartCount is present
Aug 17 22:42:09.622: INFO: Initial restart count of pod liveness-b1a45c82-bc59-47ad-aee2-47bc65791915 is 0
Aug 17 22:42:35.715: INFO: Restart count of pod container-probe-7283/liveness-b1a45c82-bc59-47ad-aee2-47bc65791915 is now 1 (26.092913763s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:42:35.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7283" for this suite.

• [SLOW TEST:30.861 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2423,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:42:35.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 22:42:35.857: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfa51576-cafa-48a5-8757-ad267f186856" in namespace "downward-api-5383" to be "success or failure"
Aug 17 22:42:36.136: INFO: Pod "downwardapi-volume-dfa51576-cafa-48a5-8757-ad267f186856": Phase="Pending", Reason="", readiness=false. Elapsed: 279.29388ms
Aug 17 22:42:38.160: INFO: Pod "downwardapi-volume-dfa51576-cafa-48a5-8757-ad267f186856": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303049293s
Aug 17 22:42:40.167: INFO: Pod "downwardapi-volume-dfa51576-cafa-48a5-8757-ad267f186856": Phase="Running", Reason="", readiness=true. Elapsed: 4.31018876s
Aug 17 22:42:42.174: INFO: Pod "downwardapi-volume-dfa51576-cafa-48a5-8757-ad267f186856": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.316675159s
STEP: Saw pod success
Aug 17 22:42:42.174: INFO: Pod "downwardapi-volume-dfa51576-cafa-48a5-8757-ad267f186856" satisfied condition "success or failure"
Aug 17 22:42:42.178: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-dfa51576-cafa-48a5-8757-ad267f186856 container client-container: 
STEP: delete the pod
Aug 17 22:42:42.263: INFO: Waiting for pod downwardapi-volume-dfa51576-cafa-48a5-8757-ad267f186856 to disappear
Aug 17 22:42:42.294: INFO: Pod downwardapi-volume-dfa51576-cafa-48a5-8757-ad267f186856 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:42:42.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5383" for this suite.

• [SLOW TEST:6.534 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2441,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:42:42.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 17 22:42:42.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6554'
Aug 17 22:42:43.709: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 17 22:42:43.709: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Aug 17 22:42:43.736: INFO: scanned /root for discovery docs: 
Aug 17 22:42:43.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6554'
Aug 17 22:43:02.123: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 17 22:43:02.123: INFO: stdout: "Created e2e-test-httpd-rc-67917e2d7c6fc488aee9f2ebb8d05f48\nScaling up e2e-test-httpd-rc-67917e2d7c6fc488aee9f2ebb8d05f48 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-67917e2d7c6fc488aee9f2ebb8d05f48 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-67917e2d7c6fc488aee9f2ebb8d05f48 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Aug 17 22:43:02.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-6554'
Aug 17 22:43:03.426: INFO: stderr: ""
Aug 17 22:43:03.427: INFO: stdout: "e2e-test-httpd-rc-67917e2d7c6fc488aee9f2ebb8d05f48-trpfj "
Aug 17 22:43:03.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-67917e2d7c6fc488aee9f2ebb8d05f48-trpfj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6554'
Aug 17 22:43:04.677: INFO: stderr: ""
Aug 17 22:43:04.677: INFO: stdout: "true"
Aug 17 22:43:04.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-67917e2d7c6fc488aee9f2ebb8d05f48-trpfj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6554'
Aug 17 22:43:05.915: INFO: stderr: ""
Aug 17 22:43:05.915: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Aug 17 22:43:05.915: INFO: e2e-test-httpd-rc-67917e2d7c6fc488aee9f2ebb8d05f48-trpfj is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Aug 17 22:43:05.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6554'
Aug 17 22:43:07.359: INFO: stderr: ""
Aug 17 22:43:07.359: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:43:07.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6554" for this suite.

• [SLOW TEST:25.126 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":154,"skipped":2449,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:43:07.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 17 22:43:07.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2686'
Aug 17 22:43:09.892: INFO: stderr: ""
Aug 17 22:43:09.892: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 17 22:43:09.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2686'
Aug 17 22:43:11.173: INFO: stderr: ""
Aug 17 22:43:11.173: INFO: stdout: "update-demo-nautilus-65h5k update-demo-nautilus-hxvpb "
Aug 17 22:43:11.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65h5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:12.590: INFO: stderr: ""
Aug 17 22:43:12.590: INFO: stdout: ""
Aug 17 22:43:12.590: INFO: update-demo-nautilus-65h5k is created but not running
Aug 17 22:43:17.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2686'
Aug 17 22:43:18.859: INFO: stderr: ""
Aug 17 22:43:18.859: INFO: stdout: "update-demo-nautilus-65h5k update-demo-nautilus-hxvpb "
Aug 17 22:43:18.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65h5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:20.105: INFO: stderr: ""
Aug 17 22:43:20.105: INFO: stdout: "true"
Aug 17 22:43:20.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65h5k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:21.339: INFO: stderr: ""
Aug 17 22:43:21.339: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 17 22:43:21.339: INFO: validating pod update-demo-nautilus-65h5k
Aug 17 22:43:21.344: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 17 22:43:21.345: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 17 22:43:21.345: INFO: update-demo-nautilus-65h5k is verified up and running
Aug 17 22:43:21.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hxvpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:22.613: INFO: stderr: ""
Aug 17 22:43:22.613: INFO: stdout: "true"
Aug 17 22:43:22.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hxvpb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:23.880: INFO: stderr: ""
Aug 17 22:43:23.880: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 17 22:43:23.880: INFO: validating pod update-demo-nautilus-hxvpb
Aug 17 22:43:23.887: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 17 22:43:23.887: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 17 22:43:23.887: INFO: update-demo-nautilus-hxvpb is verified up and running
STEP: scaling down the replication controller
Aug 17 22:43:23.897: INFO: scanned /root for discovery docs: 
Aug 17 22:43:23.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2686'
Aug 17 22:43:25.262: INFO: stderr: ""
Aug 17 22:43:25.263: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 17 22:43:25.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2686'
Aug 17 22:43:26.523: INFO: stderr: ""
Aug 17 22:43:26.523: INFO: stdout: "update-demo-nautilus-65h5k update-demo-nautilus-hxvpb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 17 22:43:31.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2686'
Aug 17 22:43:32.796: INFO: stderr: ""
Aug 17 22:43:32.796: INFO: stdout: "update-demo-nautilus-65h5k "
Aug 17 22:43:32.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65h5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:34.027: INFO: stderr: ""
Aug 17 22:43:34.027: INFO: stdout: "true"
Aug 17 22:43:34.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65h5k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:35.263: INFO: stderr: ""
Aug 17 22:43:35.263: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 17 22:43:35.263: INFO: validating pod update-demo-nautilus-65h5k
Aug 17 22:43:35.269: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 17 22:43:35.269: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 17 22:43:35.269: INFO: update-demo-nautilus-65h5k is verified up and running
STEP: scaling up the replication controller
Aug 17 22:43:35.280: INFO: scanned /root for discovery docs: 
Aug 17 22:43:35.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2686'
Aug 17 22:43:36.571: INFO: stderr: ""
Aug 17 22:43:36.571: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 17 22:43:36.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2686'
Aug 17 22:43:37.808: INFO: stderr: ""
Aug 17 22:43:37.808: INFO: stdout: "update-demo-nautilus-65h5k update-demo-nautilus-bcqgs "
Aug 17 22:43:37.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65h5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:39.066: INFO: stderr: ""
Aug 17 22:43:39.066: INFO: stdout: "true"
Aug 17 22:43:39.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-65h5k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:40.323: INFO: stderr: ""
Aug 17 22:43:40.324: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 17 22:43:40.324: INFO: validating pod update-demo-nautilus-65h5k
Aug 17 22:43:40.329: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 17 22:43:40.330: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 17 22:43:40.330: INFO: update-demo-nautilus-65h5k is verified up and running
Aug 17 22:43:40.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bcqgs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:41.753: INFO: stderr: ""
Aug 17 22:43:41.753: INFO: stdout: "true"
Aug 17 22:43:41.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bcqgs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2686'
Aug 17 22:43:44.071: INFO: stderr: ""
Aug 17 22:43:44.071: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 17 22:43:44.071: INFO: validating pod update-demo-nautilus-bcqgs
Aug 17 22:43:44.079: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 17 22:43:44.079: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 17 22:43:44.079: INFO: update-demo-nautilus-bcqgs is verified up and running
STEP: using delete to clean up resources
Aug 17 22:43:44.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2686'
Aug 17 22:43:45.394: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 17 22:43:45.395: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 17 22:43:45.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2686'
Aug 17 22:43:47.029: INFO: stderr: "No resources found in kubectl-2686 namespace.\n"
Aug 17 22:43:47.029: INFO: stdout: ""
Aug 17 22:43:47.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2686 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 17 22:43:48.354: INFO: stderr: ""
Aug 17 22:43:48.354: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:43:48.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2686" for this suite.

• [SLOW TEST:41.039 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":155,"skipped":2462,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:43:48.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-f351a3b9-6d00-44a1-8e80-3684b950b515
STEP: Creating a pod to test consume secrets
Aug 17 22:43:48.840: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-23ddfccb-6354-42d0-b2a0-56c6daa1d080" in namespace "projected-434" to be "success or failure"
Aug 17 22:43:48.864: INFO: Pod "pod-projected-secrets-23ddfccb-6354-42d0-b2a0-56c6daa1d080": Phase="Pending", Reason="", readiness=false. Elapsed: 23.740653ms
Aug 17 22:43:50.896: INFO: Pod "pod-projected-secrets-23ddfccb-6354-42d0-b2a0-56c6daa1d080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055710726s
Aug 17 22:43:52.903: INFO: Pod "pod-projected-secrets-23ddfccb-6354-42d0-b2a0-56c6daa1d080": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062835185s
Aug 17 22:43:54.911: INFO: Pod "pod-projected-secrets-23ddfccb-6354-42d0-b2a0-56c6daa1d080": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070541274s
STEP: Saw pod success
Aug 17 22:43:54.911: INFO: Pod "pod-projected-secrets-23ddfccb-6354-42d0-b2a0-56c6daa1d080" satisfied condition "success or failure"
Aug 17 22:43:54.917: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-23ddfccb-6354-42d0-b2a0-56c6daa1d080 container projected-secret-volume-test: 
STEP: delete the pod
Aug 17 22:43:55.291: INFO: Waiting for pod pod-projected-secrets-23ddfccb-6354-42d0-b2a0-56c6daa1d080 to disappear
Aug 17 22:43:55.357: INFO: Pod pod-projected-secrets-23ddfccb-6354-42d0-b2a0-56c6daa1d080 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:43:55.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-434" for this suite.

• [SLOW TEST:7.170 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2476,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:43:55.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-d1025b7e-6997-4655-8474-4b9a82f80f6d
STEP: Creating a pod to test consume secrets
Aug 17 22:43:56.050: INFO: Waiting up to 5m0s for pod "pod-secrets-fbaaa679-677f-40d4-8df4-aa4b650291ad" in namespace "secrets-4788" to be "success or failure"
Aug 17 22:43:56.166: INFO: Pod "pod-secrets-fbaaa679-677f-40d4-8df4-aa4b650291ad": Phase="Pending", Reason="", readiness=false. Elapsed: 115.992675ms
Aug 17 22:43:58.184: INFO: Pod "pod-secrets-fbaaa679-677f-40d4-8df4-aa4b650291ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134080259s
Aug 17 22:44:00.192: INFO: Pod "pod-secrets-fbaaa679-677f-40d4-8df4-aa4b650291ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1415513s
STEP: Saw pod success
Aug 17 22:44:00.192: INFO: Pod "pod-secrets-fbaaa679-677f-40d4-8df4-aa4b650291ad" satisfied condition "success or failure"
Aug 17 22:44:00.198: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-fbaaa679-677f-40d4-8df4-aa4b650291ad container secret-volume-test: 
STEP: delete the pod
Aug 17 22:44:00.280: INFO: Waiting for pod pod-secrets-fbaaa679-677f-40d4-8df4-aa4b650291ad to disappear
Aug 17 22:44:00.307: INFO: Pod pod-secrets-fbaaa679-677f-40d4-8df4-aa4b650291ad no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:00.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4788" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2486,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:00.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 17 22:44:01.176: INFO: Waiting up to 5m0s for pod "downward-api-b24259ed-6101-49a8-927c-e99c320caca4" in namespace "downward-api-709" to be "success or failure"
Aug 17 22:44:01.254: INFO: Pod "downward-api-b24259ed-6101-49a8-927c-e99c320caca4": Phase="Pending", Reason="", readiness=false. Elapsed: 77.98431ms
Aug 17 22:44:03.262: INFO: Pod "downward-api-b24259ed-6101-49a8-927c-e99c320caca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085775672s
Aug 17 22:44:05.323: INFO: Pod "downward-api-b24259ed-6101-49a8-927c-e99c320caca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146450346s
STEP: Saw pod success
Aug 17 22:44:05.323: INFO: Pod "downward-api-b24259ed-6101-49a8-927c-e99c320caca4" satisfied condition "success or failure"
Aug 17 22:44:05.328: INFO: Trying to get logs from node jerma-worker2 pod downward-api-b24259ed-6101-49a8-927c-e99c320caca4 container dapi-container: 
STEP: delete the pod
Aug 17 22:44:05.408: INFO: Waiting for pod downward-api-b24259ed-6101-49a8-927c-e99c320caca4 to disappear
Aug 17 22:44:05.555: INFO: Pod downward-api-b24259ed-6101-49a8-927c-e99c320caca4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:05.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-709" for this suite.

• [SLOW TEST:5.243 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:05.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:06.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8914" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":159,"skipped":2578,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:06.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:44:07.599: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-337ca579-f60c-4898-aa8c-1c4bac195488" in namespace "security-context-test-8146" to be "success or failure"
Aug 17 22:44:07.845: INFO: Pod "busybox-privileged-false-337ca579-f60c-4898-aa8c-1c4bac195488": Phase="Pending", Reason="", readiness=false. Elapsed: 245.423404ms
Aug 17 22:44:09.851: INFO: Pod "busybox-privileged-false-337ca579-f60c-4898-aa8c-1c4bac195488": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251414771s
Aug 17 22:44:11.976: INFO: Pod "busybox-privileged-false-337ca579-f60c-4898-aa8c-1c4bac195488": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.376588758s
Aug 17 22:44:11.976: INFO: Pod "busybox-privileged-false-337ca579-f60c-4898-aa8c-1c4bac195488" satisfied condition "success or failure"
Aug 17 22:44:11.989: INFO: Got logs for pod "busybox-privileged-false-337ca579-f60c-4898-aa8c-1c4bac195488": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:11.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8146" for this suite.

• [SLOW TEST:5.026 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2581,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:12.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-8a0367cf-0ecb-46c4-b4a2-71369ed7221b
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:12.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8943" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":161,"skipped":2588,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:12.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:16.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1163" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2596,"failed":0}
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:16.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-7681/configmap-test-51e5d01a-6e28-45ce-ad3a-aee794430537
STEP: Creating a pod to test consume configMaps
Aug 17 22:44:16.872: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd0ed8fd-de91-4739-841f-85a485b17587" in namespace "configmap-7681" to be "success or failure"
Aug 17 22:44:16.902: INFO: Pod "pod-configmaps-cd0ed8fd-de91-4739-841f-85a485b17587": Phase="Pending", Reason="", readiness=false. Elapsed: 29.109776ms
Aug 17 22:44:18.909: INFO: Pod "pod-configmaps-cd0ed8fd-de91-4739-841f-85a485b17587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036067569s
Aug 17 22:44:20.916: INFO: Pod "pod-configmaps-cd0ed8fd-de91-4739-841f-85a485b17587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043502038s
STEP: Saw pod success
Aug 17 22:44:20.917: INFO: Pod "pod-configmaps-cd0ed8fd-de91-4739-841f-85a485b17587" satisfied condition "success or failure"
Aug 17 22:44:20.922: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-cd0ed8fd-de91-4739-841f-85a485b17587 container env-test: 
STEP: delete the pod
Aug 17 22:44:21.077: INFO: Waiting for pod pod-configmaps-cd0ed8fd-de91-4739-841f-85a485b17587 to disappear
Aug 17 22:44:21.099: INFO: Pod pod-configmaps-cd0ed8fd-de91-4739-841f-85a485b17587 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:21.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7681" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2597,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:21.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 17 22:44:21.393: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4042 /api/v1/namespaces/watch-4042/configmaps/e2e-watch-test-watch-closed 57107966-6f2a-442d-b16f-0d4fe9980f08 891060 0 2020-08-17 22:44:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 17 22:44:21.394: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4042 /api/v1/namespaces/watch-4042/configmaps/e2e-watch-test-watch-closed 57107966-6f2a-442d-b16f-0d4fe9980f08 891061 0 2020-08-17 22:44:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 17 22:44:21.450: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4042 /api/v1/namespaces/watch-4042/configmaps/e2e-watch-test-watch-closed 57107966-6f2a-442d-b16f-0d4fe9980f08 891063 0 2020-08-17 22:44:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 17 22:44:21.451: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4042 /api/v1/namespaces/watch-4042/configmaps/e2e-watch-test-watch-closed 57107966-6f2a-442d-b16f-0d4fe9980f08 891066 0 2020-08-17 22:44:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:21.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4042" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":164,"skipped":2603,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:21.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4778
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-4778
I0817 22:44:21.693162       7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4778, replica count: 2
I0817 22:44:24.748121       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 22:44:27.748909       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 22:44:30.749710       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 17 22:44:30.750: INFO: Creating new exec pod
Aug 17 22:44:37.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4778 execpod967ch -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 17 22:44:39.298: INFO: stderr: "I0817 22:44:39.170260    3534 log.go:172] (0x4000aca000) (0x400095a000) Create stream\nI0817 22:44:39.175588    3534 log.go:172] (0x4000aca000) (0x400095a000) Stream added, broadcasting: 1\nI0817 22:44:39.187225    3534 log.go:172] (0x4000aca000) Reply frame received for 1\nI0817 22:44:39.187771    3534 log.go:172] (0x4000aca000) (0x400095a0a0) Create stream\nI0817 22:44:39.187827    3534 log.go:172] (0x4000aca000) (0x400095a0a0) Stream added, broadcasting: 3\nI0817 22:44:39.189395    3534 log.go:172] (0x4000aca000) Reply frame received for 3\nI0817 22:44:39.189735    3534 log.go:172] (0x4000aca000) (0x400095a140) Create stream\nI0817 22:44:39.189810    3534 log.go:172] (0x4000aca000) (0x400095a140) Stream added, broadcasting: 5\nI0817 22:44:39.191103    3534 log.go:172] (0x4000aca000) Reply frame received for 5\nI0817 22:44:39.277893    3534 log.go:172] (0x4000aca000) Data frame received for 5\nI0817 22:44:39.278388    3534 log.go:172] (0x4000aca000) Data frame received for 3\nI0817 22:44:39.278601    3534 log.go:172] (0x400095a0a0) (3) Data frame handling\nI0817 22:44:39.278748    3534 log.go:172] (0x4000aca000) Data frame received for 1\nI0817 22:44:39.278922    3534 log.go:172] (0x400095a000) (1) Data frame handling\nI0817 22:44:39.279163    3534 log.go:172] (0x400095a140) (5) Data frame handling\nI0817 22:44:39.281822    3534 log.go:172] (0x400095a140) (5) Data frame sent\nI0817 22:44:39.282057    3534 log.go:172] (0x4000aca000) Data frame received for 5\nI0817 22:44:39.282163    3534 log.go:172] (0x400095a140) (5) Data frame handling\nI0817 22:44:39.282366    3534 log.go:172] (0x400095a000) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0817 22:44:39.283603    3534 log.go:172] (0x4000aca000) (0x400095a000) Stream removed, broadcasting: 1\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0817 22:44:39.287018    3534 log.go:172] (0x400095a140) (5) Data frame sent\nI0817 22:44:39.287152    3534 log.go:172] (0x4000aca000) Data frame received for 5\nI0817 22:44:39.287245    3534 log.go:172] (0x400095a140) (5) Data frame handling\nI0817 22:44:39.287556    3534 log.go:172] (0x4000aca000) Go away received\nI0817 22:44:39.289515    3534 log.go:172] (0x4000aca000) (0x400095a000) Stream removed, broadcasting: 1\nI0817 22:44:39.289997    3534 log.go:172] (0x4000aca000) (0x400095a0a0) Stream removed, broadcasting: 3\nI0817 22:44:39.290470    3534 log.go:172] (0x4000aca000) (0x400095a140) Stream removed, broadcasting: 5\n"
Aug 17 22:44:39.299: INFO: stdout: ""
Aug 17 22:44:39.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4778 execpod967ch -- /bin/sh -x -c nc -zv -t -w 2 10.101.73.161 80'
Aug 17 22:44:40.724: INFO: stderr: "I0817 22:44:40.637225    3558 log.go:172] (0x4000a84a50) (0x400083d9a0) Create stream\nI0817 22:44:40.639521    3558 log.go:172] (0x4000a84a50) (0x400083d9a0) Stream added, broadcasting: 1\nI0817 22:44:40.649741    3558 log.go:172] (0x4000a84a50) Reply frame received for 1\nI0817 22:44:40.650891    3558 log.go:172] (0x4000a84a50) (0x400083db80) Create stream\nI0817 22:44:40.651009    3558 log.go:172] (0x4000a84a50) (0x400083db80) Stream added, broadcasting: 3\nI0817 22:44:40.652960    3558 log.go:172] (0x4000a84a50) Reply frame received for 3\nI0817 22:44:40.653468    3558 log.go:172] (0x4000a84a50) (0x400083dc20) Create stream\nI0817 22:44:40.653585    3558 log.go:172] (0x4000a84a50) (0x400083dc20) Stream added, broadcasting: 5\nI0817 22:44:40.655062    3558 log.go:172] (0x4000a84a50) Reply frame received for 5\nI0817 22:44:40.707783    3558 log.go:172] (0x4000a84a50) Data frame received for 5\nI0817 22:44:40.708147    3558 log.go:172] (0x4000a84a50) Data frame received for 3\nI0817 22:44:40.708279    3558 log.go:172] (0x400083db80) (3) Data frame handling\nI0817 22:44:40.708367    3558 log.go:172] (0x400083dc20) (5) Data frame handling\nI0817 22:44:40.709922    3558 log.go:172] (0x4000a84a50) Data frame received for 1\nI0817 22:44:40.710039    3558 log.go:172] (0x400083d9a0) (1) Data frame handling\nI0817 22:44:40.710113    3558 log.go:172] (0x400083dc20) (5) Data frame sent\nI0817 22:44:40.710283    3558 log.go:172] (0x400083d9a0) (1) Data frame sent\n+ nc -zv -t -w 2 10.101.73.161 80\nConnection to 10.101.73.161 80 port [tcp/http] succeeded!\nI0817 22:44:40.710897    3558 log.go:172] (0x4000a84a50) Data frame received for 5\nI0817 22:44:40.710996    3558 log.go:172] (0x400083dc20) (5) Data frame handling\nI0817 22:44:40.711889    3558 log.go:172] (0x4000a84a50) (0x400083d9a0) Stream removed, broadcasting: 1\nI0817 22:44:40.714208    3558 log.go:172] (0x4000a84a50) Go away received\nI0817 22:44:40.716514    3558 log.go:172] (0x4000a84a50) (0x400083d9a0) Stream removed, broadcasting: 1\nI0817 22:44:40.717251    3558 log.go:172] (0x4000a84a50) (0x400083db80) Stream removed, broadcasting: 3\nI0817 22:44:40.717482    3558 log.go:172] (0x4000a84a50) (0x400083dc20) Stream removed, broadcasting: 5\n"
Aug 17 22:44:40.725: INFO: stdout: ""
Aug 17 22:44:40.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4778 execpod967ch -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30813'
Aug 17 22:44:42.196: INFO: stderr: "I0817 22:44:42.096640    3581 log.go:172] (0x4000a0a000) (0x4000ab4000) Create stream\nI0817 22:44:42.099168    3581 log.go:172] (0x4000a0a000) (0x4000ab4000) Stream added, broadcasting: 1\nI0817 22:44:42.111263    3581 log.go:172] (0x4000a0a000) Reply frame received for 1\nI0817 22:44:42.112407    3581 log.go:172] (0x4000a0a000) (0x4000a02000) Create stream\nI0817 22:44:42.112511    3581 log.go:172] (0x4000a0a000) (0x4000a02000) Stream added, broadcasting: 3\nI0817 22:44:42.114310    3581 log.go:172] (0x4000a0a000) Reply frame received for 3\nI0817 22:44:42.114582    3581 log.go:172] (0x4000a0a000) (0x40007a7ae0) Create stream\nI0817 22:44:42.114641    3581 log.go:172] (0x4000a0a000) (0x40007a7ae0) Stream added, broadcasting: 5\nI0817 22:44:42.115931    3581 log.go:172] (0x4000a0a000) Reply frame received for 5\nI0817 22:44:42.175579    3581 log.go:172] (0x4000a0a000) Data frame received for 3\nI0817 22:44:42.176244    3581 log.go:172] (0x4000a0a000) Data frame received for 5\nI0817 22:44:42.176463    3581 log.go:172] (0x40007a7ae0) (5) Data frame handling\nI0817 22:44:42.177021    3581 log.go:172] (0x4000a0a000) Data frame received for 1\nI0817 22:44:42.177142    3581 log.go:172] (0x4000ab4000) (1) Data frame handling\nI0817 22:44:42.177208    3581 log.go:172] (0x4000a02000) (3) Data frame handling\nI0817 22:44:42.177646    3581 log.go:172] (0x4000ab4000) (1) Data frame sent\nI0817 22:44:42.178591    3581 log.go:172] (0x40007a7ae0) (5) Data frame sent\nI0817 22:44:42.178685    3581 log.go:172] (0x4000a0a000) Data frame received for 5\nI0817 22:44:42.178748    3581 log.go:172] (0x40007a7ae0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 30813\nConnection to 172.18.0.6 30813 port [tcp/30813] succeeded!\nI0817 22:44:42.181020    3581 log.go:172] (0x4000a0a000) (0x4000ab4000) Stream removed, broadcasting: 1\nI0817 22:44:42.183028    3581 log.go:172] (0x4000a0a000) Go away received\nI0817 22:44:42.187232    3581 log.go:172] (0x4000a0a000) (0x4000ab4000) Stream removed, broadcasting: 1\nI0817 22:44:42.187524    3581 log.go:172] (0x4000a0a000) (0x4000a02000) Stream removed, broadcasting: 3\nI0817 22:44:42.187716    3581 log.go:172] (0x4000a0a000) (0x40007a7ae0) Stream removed, broadcasting: 5\n"
Aug 17 22:44:42.197: INFO: stdout: ""
Aug 17 22:44:42.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4778 execpod967ch -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 30813'
Aug 17 22:44:43.649: INFO: stderr: "I0817 22:44:43.535307    3605 log.go:172] (0x4000a58b00) (0x40007921e0) Create stream\nI0817 22:44:43.537695    3605 log.go:172] (0x4000a58b00) (0x40007921e0) Stream added, broadcasting: 1\nI0817 22:44:43.551223    3605 log.go:172] (0x4000a58b00) Reply frame received for 1\nI0817 22:44:43.551748    3605 log.go:172] (0x4000a58b00) (0x400058f400) Create stream\nI0817 22:44:43.551814    3605 log.go:172] (0x4000a58b00) (0x400058f400) Stream added, broadcasting: 3\nI0817 22:44:43.552982    3605 log.go:172] (0x4000a58b00) Reply frame received for 3\nI0817 22:44:43.553194    3605 log.go:172] (0x4000a58b00) (0x40007920a0) Create stream\nI0817 22:44:43.553251    3605 log.go:172] (0x4000a58b00) (0x40007920a0) Stream added, broadcasting: 5\nI0817 22:44:43.554418    3605 log.go:172] (0x4000a58b00) Reply frame received for 5\nI0817 22:44:43.625588    3605 log.go:172] (0x4000a58b00) Data frame received for 5\nI0817 22:44:43.625840    3605 log.go:172] (0x4000a58b00) Data frame received for 3\nI0817 22:44:43.625948    3605 log.go:172] (0x40007920a0) (5) Data frame handling\nI0817 22:44:43.626165    3605 log.go:172] (0x400058f400) (3) Data frame handling\nI0817 22:44:43.626613    3605 log.go:172] (0x40007920a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 30813\nI0817 22:44:43.627015    3605 log.go:172] (0x4000a58b00) Data frame received for 1\nI0817 22:44:43.627103    3605 log.go:172] (0x40007921e0) (1) Data frame handling\nI0817 22:44:43.627194    3605 log.go:172] (0x40007921e0) (1) Data frame sent\nI0817 22:44:43.628125    3605 log.go:172] (0x4000a58b00) Data frame received for 5\nI0817 22:44:43.628205    3605 log.go:172] (0x40007920a0) (5) Data frame handling\nConnection to 172.18.0.3 30813 port [tcp/30813] succeeded!\nI0817 22:44:43.628327    3605 log.go:172] (0x40007920a0) (5) Data frame sent\nI0817 22:44:43.628453    3605 log.go:172] (0x4000a58b00) Data frame received for 5\nI0817 22:44:43.628522    3605 log.go:172] (0x40007920a0) (5) Data frame handling\nI0817 22:44:43.629814    3605 log.go:172] (0x4000a58b00) (0x40007921e0) Stream removed, broadcasting: 1\nI0817 22:44:43.632828    3605 log.go:172] (0x4000a58b00) Go away received\nI0817 22:44:43.638775    3605 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0x400058f400), 0x5:(*spdystream.Stream)(0x40007920a0)}\nI0817 22:44:43.638991    3605 log.go:172] (0x4000a58b00) (0x40007921e0) Stream removed, broadcasting: 1\nI0817 22:44:43.639305    3605 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0x400058f400), 0x5:(*spdystream.Stream)(0x40007920a0)}\nI0817 22:44:43.639694    3605 log.go:172] (0x4000a58b00) (0x400058f400) Stream removed, broadcasting: 3\nI0817 22:44:43.640055    3605 log.go:172] (0x4000a58b00) (0x40007920a0) Stream removed, broadcasting: 5\n"
Aug 17 22:44:43.649: INFO: stdout: ""
Aug 17 22:44:43.650: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:43.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4778" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.485 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":165,"skipped":2607,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:43.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:44:44.413: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9104a4d8-9fcc-43d4-b43b-d71c67044641", Controller:(*bool)(0x40054fc6ba), BlockOwnerDeletion:(*bool)(0x40054fc6bb)}}
Aug 17 22:44:44.753: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d849160f-11bf-4108-8e5a-84afa9026968", Controller:(*bool)(0x4005dfe20a), BlockOwnerDeletion:(*bool)(0x4005dfe20b)}}
Aug 17 22:44:44.836: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2c38f9a2-2b59-4678-9c0a-c2994b83ab02", Controller:(*bool)(0x40054fc87a), BlockOwnerDeletion:(*bool)(0x40054fc87b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:50.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7004" for this suite.

• [SLOW TEST:6.391 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":166,"skipped":2620,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:50.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 22:44:51.090: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69374f07-46b4-415d-8772-aac8b5c83195" in namespace "downward-api-1290" to be "success or failure"
Aug 17 22:44:51.159: INFO: Pod "downwardapi-volume-69374f07-46b4-415d-8772-aac8b5c83195": Phase="Pending", Reason="", readiness=false. Elapsed: 69.247747ms
Aug 17 22:44:53.166: INFO: Pod "downwardapi-volume-69374f07-46b4-415d-8772-aac8b5c83195": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076094828s
Aug 17 22:44:55.173: INFO: Pod "downwardapi-volume-69374f07-46b4-415d-8772-aac8b5c83195": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082746986s
Aug 17 22:44:57.446: INFO: Pod "downwardapi-volume-69374f07-46b4-415d-8772-aac8b5c83195": Phase="Running", Reason="", readiness=true. Elapsed: 6.355555163s
Aug 17 22:44:59.453: INFO: Pod "downwardapi-volume-69374f07-46b4-415d-8772-aac8b5c83195": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.363076343s
STEP: Saw pod success
Aug 17 22:44:59.453: INFO: Pod "downwardapi-volume-69374f07-46b4-415d-8772-aac8b5c83195" satisfied condition "success or failure"
Aug 17 22:44:59.458: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-69374f07-46b4-415d-8772-aac8b5c83195 container client-container: 
STEP: delete the pod
Aug 17 22:44:59.604: INFO: Waiting for pod downwardapi-volume-69374f07-46b4-415d-8772-aac8b5c83195 to disappear
Aug 17 22:44:59.650: INFO: Pod downwardapi-volume-69374f07-46b4-415d-8772-aac8b5c83195 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:44:59.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1290" for this suite.

• [SLOW TEST:9.319 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2621,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:44:59.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 17 22:45:08.328: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 17 22:45:08.376: INFO: Pod pod-with-prestop-http-hook still exists
Aug 17 22:45:10.376: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 17 22:45:10.383: INFO: Pod pod-with-prestop-http-hook still exists
Aug 17 22:45:12.376: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 17 22:45:12.383: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:45:12.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1448" for this suite.

• [SLOW TEST:12.739 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2636,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:45:12.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:45:16.735: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 22:45:18.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301116, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301116, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301116, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301116, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 22:45:20.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301116, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301116, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301116, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301116, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 22:45:23.876: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:45:34.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1631" for this suite.
STEP: Destroying namespace "webhook-1631-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.300 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":169,"skipped":2652,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:45:34.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:46:13.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2854" for this suite.
STEP: Destroying namespace "nsdeletetest-7534" for this suite.
Aug 17 22:46:14.165: INFO: Namespace nsdeletetest-7534 was already deleted
STEP: Destroying namespace "nsdeletetest-2703" for this suite.

• [SLOW TEST:39.498 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":170,"skipped":2677,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:46:14.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-4e6495aa-1007-4d28-aa55-c3071b38bb5b
STEP: Creating a pod to test consume secrets
Aug 17 22:46:14.416: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b41b197d-a016-4618-a4aa-a629028b3735" in namespace "projected-2352" to be "success or failure"
Aug 17 22:46:14.519: INFO: Pod "pod-projected-secrets-b41b197d-a016-4618-a4aa-a629028b3735": Phase="Pending", Reason="", readiness=false. Elapsed: 102.855856ms
Aug 17 22:46:16.982: INFO: Pod "pod-projected-secrets-b41b197d-a016-4618-a4aa-a629028b3735": Phase="Pending", Reason="", readiness=false. Elapsed: 2.565177282s
Aug 17 22:46:18.986: INFO: Pod "pod-projected-secrets-b41b197d-a016-4618-a4aa-a629028b3735": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.569282018s
STEP: Saw pod success
Aug 17 22:46:18.986: INFO: Pod "pod-projected-secrets-b41b197d-a016-4618-a4aa-a629028b3735" satisfied condition "success or failure"
Aug 17 22:46:19.063: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-b41b197d-a016-4618-a4aa-a629028b3735 container projected-secret-volume-test: 
STEP: delete the pod
Aug 17 22:46:19.298: INFO: Waiting for pod pod-projected-secrets-b41b197d-a016-4618-a4aa-a629028b3735 to disappear
Aug 17 22:46:19.301: INFO: Pod pod-projected-secrets-b41b197d-a016-4618-a4aa-a629028b3735 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:46:19.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2352" for this suite.

• [SLOW TEST:5.285 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2682,"failed":0}
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:46:19.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Aug 17 22:46:19.634: INFO: Waiting up to 5m0s for pod "var-expansion-cbea7ec7-0ac4-4638-bde1-15bd12b60bf1" in namespace "var-expansion-4300" to be "success or failure"
Aug 17 22:46:19.643: INFO: Pod "var-expansion-cbea7ec7-0ac4-4638-bde1-15bd12b60bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472972ms
Aug 17 22:46:21.663: INFO: Pod "var-expansion-cbea7ec7-0ac4-4638-bde1-15bd12b60bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028507094s
Aug 17 22:46:23.670: INFO: Pod "var-expansion-cbea7ec7-0ac4-4638-bde1-15bd12b60bf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035368815s
STEP: Saw pod success
Aug 17 22:46:23.670: INFO: Pod "var-expansion-cbea7ec7-0ac4-4638-bde1-15bd12b60bf1" satisfied condition "success or failure"
Aug 17 22:46:23.675: INFO: Trying to get logs from node jerma-worker pod var-expansion-cbea7ec7-0ac4-4638-bde1-15bd12b60bf1 container dapi-container: 
STEP: delete the pod
Aug 17 22:46:23.739: INFO: Waiting for pod var-expansion-cbea7ec7-0ac4-4638-bde1-15bd12b60bf1 to disappear
Aug 17 22:46:23.815: INFO: Pod var-expansion-cbea7ec7-0ac4-4638-bde1-15bd12b60bf1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:46:23.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4300" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2682,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:46:23.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 17 22:46:24.525: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:24.578: INFO: Number of nodes with available pods: 0
Aug 17 22:46:24.578: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 22:46:26.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:26.552: INFO: Number of nodes with available pods: 0
Aug 17 22:46:26.552: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 22:46:26.740: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:26.745: INFO: Number of nodes with available pods: 0
Aug 17 22:46:26.745: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 22:46:28.025: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:28.030: INFO: Number of nodes with available pods: 0
Aug 17 22:46:28.030: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 22:46:28.753: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:28.797: INFO: Number of nodes with available pods: 0
Aug 17 22:46:28.797: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 22:46:29.667: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:29.712: INFO: Number of nodes with available pods: 0
Aug 17 22:46:29.712: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 22:46:30.630: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:30.636: INFO: Number of nodes with available pods: 1
Aug 17 22:46:30.636: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 22:46:31.587: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:31.593: INFO: Number of nodes with available pods: 2
Aug 17 22:46:31.593: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 17 22:46:31.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:31.645: INFO: Number of nodes with available pods: 1
Aug 17 22:46:31.645: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:32.658: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:32.663: INFO: Number of nodes with available pods: 1
Aug 17 22:46:32.663: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:33.902: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:33.907: INFO: Number of nodes with available pods: 1
Aug 17 22:46:33.907: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:34.654: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:34.660: INFO: Number of nodes with available pods: 1
Aug 17 22:46:34.660: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:35.659: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:35.665: INFO: Number of nodes with available pods: 1
Aug 17 22:46:35.665: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:36.656: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:36.666: INFO: Number of nodes with available pods: 1
Aug 17 22:46:36.666: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:37.654: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:37.660: INFO: Number of nodes with available pods: 1
Aug 17 22:46:37.660: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:38.654: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:38.662: INFO: Number of nodes with available pods: 1
Aug 17 22:46:38.662: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:39.717: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:39.803: INFO: Number of nodes with available pods: 1
Aug 17 22:46:39.804: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:40.656: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:40.663: INFO: Number of nodes with available pods: 1
Aug 17 22:46:40.663: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:41.779: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:42.199: INFO: Number of nodes with available pods: 1
Aug 17 22:46:42.199: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:42.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:42.916: INFO: Number of nodes with available pods: 1
Aug 17 22:46:42.916: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:43.653: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:43.657: INFO: Number of nodes with available pods: 1
Aug 17 22:46:43.657: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:44.656: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:44.662: INFO: Number of nodes with available pods: 1
Aug 17 22:46:44.662: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:45.695: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:46.059: INFO: Number of nodes with available pods: 1
Aug 17 22:46:46.059: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:46.710: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:46.891: INFO: Number of nodes with available pods: 1
Aug 17 22:46:46.891: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 17 22:46:47.656: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 22:46:47.661: INFO: Number of nodes with available pods: 2
Aug 17 22:46:47.661: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-441, will wait for the garbage collector to delete the pods
Aug 17 22:46:48.204: INFO: Deleting DaemonSet.extensions daemon-set took: 486.853541ms
Aug 17 22:46:49.505: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.300899471s
Aug 17 22:46:56.422: INFO: Number of nodes with available pods: 0
Aug 17 22:46:56.423: INFO: Number of running nodes: 0, number of available pods: 0
Aug 17 22:46:56.849: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-441/daemonsets","resourceVersion":"891943"},"items":null}

Aug 17 22:46:56.920: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-441/pods","resourceVersion":"891943"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:46:58.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-441" for this suite.

• [SLOW TEST:34.897 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":173,"skipped":2692,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:46:58.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:47:02.898: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 22:47:04.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301222, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301222, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301222, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301222, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 22:47:07.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301222, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301222, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301222, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301222, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 22:47:09.986: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:47:10.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8770" for this suite.
STEP: Destroying namespace "webhook-8770-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.664 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":174,"skipped":2701,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:47:11.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 17 22:47:11.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 17 22:48:09.444: INFO: >>> kubeConfig: /root/.kube/config
Aug 17 22:48:28.519: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:49:27.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7205" for this suite.

• [SLOW TEST:135.845 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":175,"skipped":2708,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:49:27.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-ffb82d77-2947-419b-89d8-fbface07709c
STEP: Creating a pod to test consume secrets
Aug 17 22:49:27.313: INFO: Waiting up to 5m0s for pod "pod-secrets-981ec97f-d70f-4f86-b200-da592332d5bf" in namespace "secrets-3154" to be "success or failure"
Aug 17 22:49:27.317: INFO: Pod "pod-secrets-981ec97f-d70f-4f86-b200-da592332d5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.914069ms
Aug 17 22:49:29.395: INFO: Pod "pod-secrets-981ec97f-d70f-4f86-b200-da592332d5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081960578s
Aug 17 22:49:31.403: INFO: Pod "pod-secrets-981ec97f-d70f-4f86-b200-da592332d5bf": Phase="Running", Reason="", readiness=true. Elapsed: 4.089334349s
Aug 17 22:49:33.411: INFO: Pod "pod-secrets-981ec97f-d70f-4f86-b200-da592332d5bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097333086s
STEP: Saw pod success
Aug 17 22:49:33.411: INFO: Pod "pod-secrets-981ec97f-d70f-4f86-b200-da592332d5bf" satisfied condition "success or failure"
Aug 17 22:49:33.416: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-981ec97f-d70f-4f86-b200-da592332d5bf container secret-volume-test: 
STEP: delete the pod
Aug 17 22:49:33.462: INFO: Waiting for pod pod-secrets-981ec97f-d70f-4f86-b200-da592332d5bf to disappear
Aug 17 22:49:33.473: INFO: Pod pod-secrets-981ec97f-d70f-4f86-b200-da592332d5bf no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:49:33.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3154" for this suite.

• [SLOW TEST:6.245 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2731,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:49:33.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2705.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2705.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2705.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2705.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2705.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2705.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
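
The doubled $$ in the commands above is not shell syntax: the probe script runs as a container command in the test pod, and Kubernetes expands $(VAR_NAME) references in container command/args, so $$ escapes to a literal $ inside the container. A single iteration of the probe, written as plain shell with the names from this log's dns-2705 namespace, looks like:

# Resolve the pod's hostnames via /etc/hosts entries (the behavior under test)
getent hosts dns-querier-1.dns-test-service.dns-2705.svc.cluster.local && echo OK
getent hosts dns-querier-1 && echo OK

# Derive the pod's own A-record name from its IP, then resolve it over UDP and TCP
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-2705.pod.cluster.local"}')
dig +notcp +noall +answer +search "${podARec}" A
dig +tcp +noall +answer +search "${podARec}" A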

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 17 22:49:41.695: INFO: DNS probes using dns-2705/dns-test-cd7a03f4-41c2-4282-a42d-5634398467c0 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:49:41.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2705" for this suite.

• [SLOW TEST:8.306 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":177,"skipped":2733,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:49:41.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Aug 17 22:49:42.399: INFO: Waiting up to 5m0s for pod "client-containers-75cedcef-3c85-4b62-a2e0-0cf42e97d73f" in namespace "containers-9292" to be "success or failure"
Aug 17 22:49:42.452: INFO: Pod "client-containers-75cedcef-3c85-4b62-a2e0-0cf42e97d73f": Phase="Pending", Reason="", readiness=false. Elapsed: 52.269501ms
Aug 17 22:49:44.458: INFO: Pod "client-containers-75cedcef-3c85-4b62-a2e0-0cf42e97d73f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057975961s
Aug 17 22:49:46.462: INFO: Pod "client-containers-75cedcef-3c85-4b62-a2e0-0cf42e97d73f": Phase="Running", Reason="", readiness=true. Elapsed: 4.062838434s
Aug 17 22:49:48.470: INFO: Pod "client-containers-75cedcef-3c85-4b62-a2e0-0cf42e97d73f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070702078s
STEP: Saw pod success
Aug 17 22:49:48.471: INFO: Pod "client-containers-75cedcef-3c85-4b62-a2e0-0cf42e97d73f" satisfied condition "success or failure"
Aug 17 22:49:48.475: INFO: Trying to get logs from node jerma-worker2 pod client-containers-75cedcef-3c85-4b62-a2e0-0cf42e97d73f container test-container: 
STEP: delete the pod
Aug 17 22:49:48.508: INFO: Waiting for pod client-containers-75cedcef-3c85-4b62-a2e0-0cf42e97d73f to disappear
Aug 17 22:49:48.543: INFO: Pod client-containers-75cedcef-3c85-4b62-a2e0-0cf42e97d73f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:49:48.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9292" for this suite.

• [SLOW TEST:6.762 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2740,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:49:48.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9239.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9239.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 26.209.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.209.26_udp@PTR;check="$$(dig +tcp +noall +answer +search 26.209.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.209.26_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9239.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9239.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 26.209.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.209.26_udp@PTR;check="$$(dig +tcp +noall +answer +search 26.209.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.209.26_tcp@PTR;sleep 1; done
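
In addition to the A-record and pod-record checks, this probe covers SRV records for the service's named http port and a PTR (reverse) lookup of the ClusterIP 10.109.209.26 that appears in the commands. Equivalent one-shot lookups, runnable from any pod in the cluster:

# SRV record for the named port "http" on the test service
dig +noall +answer +search _http._tcp.dns-test-service.dns-9239.svc.cluster.local SRV

# PTR record for the service ClusterIP; dig -x builds the in-addr.arpa name for us
dig +noall +answer -x 10.109.209.26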

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 17 22:49:56.733: INFO: Unable to read wheezy_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:49:56.738: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:49:56.741: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:49:56.745: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:49:56.773: INFO: Unable to read jessie_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:49:56.778: INFO: Unable to read jessie_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:49:56.813: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:49:56.817: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:49:56.853: INFO: Lookups using dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340 failed for: [wheezy_udp@dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_udp@dns-test-service.dns-9239.svc.cluster.local jessie_tcp@dns-test-service.dns-9239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local]

Aug 17 22:50:01.860: INFO: Unable to read wheezy_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:01.865: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:01.869: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:01.874: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:01.902: INFO: Unable to read jessie_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:01.905: INFO: Unable to read jessie_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:01.909: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:01.912: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:01.934: INFO: Lookups using dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340 failed for: [wheezy_udp@dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_udp@dns-test-service.dns-9239.svc.cluster.local jessie_tcp@dns-test-service.dns-9239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local]

Aug 17 22:50:06.860: INFO: Unable to read wheezy_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:06.865: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:06.870: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:06.875: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:06.930: INFO: Unable to read jessie_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:06.934: INFO: Unable to read jessie_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:06.937: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:06.941: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:06.966: INFO: Lookups using dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340 failed for: [wheezy_udp@dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_udp@dns-test-service.dns-9239.svc.cluster.local jessie_tcp@dns-test-service.dns-9239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local]

Aug 17 22:50:11.870: INFO: Unable to read wheezy_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:11.875: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:12.120: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:12.213: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:12.559: INFO: Unable to read jessie_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:12.563: INFO: Unable to read jessie_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:12.567: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:12.572: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:12.605: INFO: Lookups using dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340 failed for: [wheezy_udp@dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_udp@dns-test-service.dns-9239.svc.cluster.local jessie_tcp@dns-test-service.dns-9239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local]

Aug 17 22:50:16.860: INFO: Unable to read wheezy_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:16.865: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:16.870: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:16.874: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:16.942: INFO: Unable to read jessie_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:16.947: INFO: Unable to read jessie_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:16.949: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:16.957: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:16.972: INFO: Lookups using dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340 failed for: [wheezy_udp@dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_udp@dns-test-service.dns-9239.svc.cluster.local jessie_tcp@dns-test-service.dns-9239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local]

Aug 17 22:50:21.860: INFO: Unable to read wheezy_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:21.865: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:21.870: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:21.876: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:21.899: INFO: Unable to read jessie_udp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:21.902: INFO: Unable to read jessie_tcp@dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:21.906: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:21.909: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local from pod dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340: the server could not find the requested resource (get pods dns-test-2d8b47df-9ee2-41d7-aedd-280469021340)
Aug 17 22:50:21.933: INFO: Lookups using dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340 failed for: [wheezy_udp@dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@dns-test-service.dns-9239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_udp@dns-test-service.dns-9239.svc.cluster.local jessie_tcp@dns-test-service.dns-9239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9239.svc.cluster.local]

Aug 17 22:50:26.953: INFO: DNS probes using dns-9239/dns-test-2d8b47df-9ee2-41d7-aedd-280469021340 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:50:27.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9239" for this suite.

• [SLOW TEST:38.999 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":179,"skipped":2742,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:50:27.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:50:33.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9611" for this suite.

• [SLOW TEST:6.376 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2750,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:50:33.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:50:34.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 17 22:50:44.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-935 create -f -'
Aug 17 22:50:51.577: INFO: stderr: ""
Aug 17 22:50:51.577: INFO: stdout: "e2e-test-crd-publish-openapi-9240-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 17 22:50:51.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-935 delete e2e-test-crd-publish-openapi-9240-crds test-foo'
Aug 17 22:50:52.864: INFO: stderr: ""
Aug 17 22:50:52.864: INFO: stdout: "e2e-test-crd-publish-openapi-9240-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 17 22:50:52.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-935 apply -f -'
Aug 17 22:50:54.431: INFO: stderr: ""
Aug 17 22:50:54.432: INFO: stdout: "e2e-test-crd-publish-openapi-9240-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 17 22:50:54.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-935 delete e2e-test-crd-publish-openapi-9240-crds test-foo'
Aug 17 22:50:55.729: INFO: stderr: ""
Aug 17 22:50:55.729: INFO: stdout: "e2e-test-crd-publish-openapi-9240-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 17 22:50:55.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-935 create -f -'
Aug 17 22:50:57.233: INFO: rc: 1
Aug 17 22:50:57.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-935 apply -f -'
Aug 17 22:50:58.717: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 17 22:50:58.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-935 create -f -'
Aug 17 22:51:00.448: INFO: rc: 1
Aug 17 22:51:00.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-935 apply -f -'
Aug 17 22:51:02.192: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 17 22:51:02.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9240-crds'
Aug 17 22:51:03.704: INFO: stderr: ""
Aug 17 22:51:03.705: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9240-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 17 22:51:03.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9240-crds.metadata'
Aug 17 22:51:05.253: INFO: stderr: ""
Aug 17 22:51:05.253: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9240-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 17 22:51:05.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9240-crds.spec'
Aug 17 22:51:06.793: INFO: stderr: ""
Aug 17 22:51:06.793: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9240-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 17 22:51:06.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9240-crds.spec.bars'
Aug 17 22:51:08.298: INFO: stderr: ""
Aug 17 22:51:08.298: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9240-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 17 22:51:08.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9240-crds.spec.bars2'
Aug 17 22:51:09.965: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:51:28.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-935" for this suite.

• [SLOW TEST:54.865 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":181,"skipped":2763,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:51:28.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-0b3febb0-dc3b-4cb4-97c7-34d4c7c07264
STEP: Creating a pod to test consume secrets
Aug 17 22:51:29.586: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-357b419a-2d07-4725-a144-69c8eb8869b9" in namespace "projected-8255" to be "success or failure"
Aug 17 22:51:29.627: INFO: Pod "pod-projected-secrets-357b419a-2d07-4725-a144-69c8eb8869b9": Phase="Pending", Reason="", readiness=false. Elapsed: 40.337279ms
Aug 17 22:51:31.635: INFO: Pod "pod-projected-secrets-357b419a-2d07-4725-a144-69c8eb8869b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048780716s
Aug 17 22:51:33.643: INFO: Pod "pod-projected-secrets-357b419a-2d07-4725-a144-69c8eb8869b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056332661s
Aug 17 22:51:35.650: INFO: Pod "pod-projected-secrets-357b419a-2d07-4725-a144-69c8eb8869b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06287815s
STEP: Saw pod success
Aug 17 22:51:35.650: INFO: Pod "pod-projected-secrets-357b419a-2d07-4725-a144-69c8eb8869b9" satisfied condition "success or failure"
Aug 17 22:51:35.655: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-357b419a-2d07-4725-a144-69c8eb8869b9 container projected-secret-volume-test: 
STEP: delete the pod
Aug 17 22:51:35.674: INFO: Waiting for pod pod-projected-secrets-357b419a-2d07-4725-a144-69c8eb8869b9 to disappear
Aug 17 22:51:35.678: INFO: Pod pod-projected-secrets-357b419a-2d07-4725-a144-69c8eb8869b9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:51:35.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8255" for this suite.

• [SLOW TEST:6.894 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2771,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:51:35.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:51:35.774: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:51:36.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2835" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":183,"skipped":2775,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:51:37.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0817 22:52:07.180162       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 17 22:52:07.180: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:52:07.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5009" for this suite.

• [SLOW TEST:30.184 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":184,"skipped":2782,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:52:07.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4805
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4805
I0817 22:52:07.416251       7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4805, replica count: 2
I0817 22:52:10.467959       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 22:52:13.468691       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 17 22:52:13.469: INFO: Creating new exec pod
Aug 17 22:52:20.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4805 execpod86n8p -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 17 22:52:22.315: INFO: stderr: "I0817 22:52:22.200400    3958 log.go:172] (0x4000106370) (0x40009ce140) Create stream\nI0817 22:52:22.206674    3958 log.go:172] (0x4000106370) (0x40009ce140) Stream added, broadcasting: 1\nI0817 22:52:22.218219    3958 log.go:172] (0x4000106370) Reply frame received for 1\nI0817 22:52:22.218813    3958 log.go:172] (0x4000106370) (0x40006f3cc0) Create stream\nI0817 22:52:22.218875    3958 log.go:172] (0x4000106370) (0x40006f3cc0) Stream added, broadcasting: 3\nI0817 22:52:22.220885    3958 log.go:172] (0x4000106370) Reply frame received for 3\nI0817 22:52:22.221264    3958 log.go:172] (0x4000106370) (0x40009ce1e0) Create stream\nI0817 22:52:22.221360    3958 log.go:172] (0x4000106370) (0x40009ce1e0) Stream added, broadcasting: 5\nI0817 22:52:22.223429    3958 log.go:172] (0x4000106370) Reply frame received for 5\nI0817 22:52:22.296089    3958 log.go:172] (0x4000106370) Data frame received for 5\nI0817 22:52:22.297096    3958 log.go:172] (0x4000106370) Data frame received for 3\nI0817 22:52:22.297276    3958 log.go:172] (0x40006f3cc0) (3) Data frame handling\nI0817 22:52:22.297508    3958 log.go:172] (0x40009ce1e0) (5) Data frame handling\nI0817 22:52:22.297877    3958 log.go:172] (0x4000106370) Data frame received for 1\nI0817 22:52:22.297970    3958 log.go:172] (0x40009ce140) (1) Data frame handling\nI0817 22:52:22.298668    3958 log.go:172] (0x40009ce1e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0817 22:52:22.299549    3958 log.go:172] (0x40009ce140) (1) Data frame sent\nI0817 22:52:22.300356    3958 log.go:172] (0x4000106370) Data frame received for 5\nI0817 22:52:22.300425    3958 log.go:172] (0x40009ce1e0) (5) Data frame handling\nI0817 22:52:22.300524    3958 log.go:172] (0x40009ce1e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0817 22:52:22.300616    3958 log.go:172] (0x4000106370) Data frame received for 5\nI0817 22:52:22.300719    3958 log.go:172] (0x40009ce1e0) (5) Data frame handling\nI0817 22:52:22.301437    3958 log.go:172] (0x4000106370) (0x40009ce140) Stream removed, broadcasting: 1\nI0817 22:52:22.303878    3958 log.go:172] (0x4000106370) Go away received\nI0817 22:52:22.306425    3958 log.go:172] (0x4000106370) (0x40009ce140) Stream removed, broadcasting: 1\nI0817 22:52:22.306887    3958 log.go:172] (0x4000106370) (0x40006f3cc0) Stream removed, broadcasting: 3\nI0817 22:52:22.307248    3958 log.go:172] (0x4000106370) (0x40009ce1e0) Stream removed, broadcasting: 5\n"
Aug 17 22:52:22.317: INFO: stdout: ""
Aug 17 22:52:22.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4805 execpod86n8p -- /bin/sh -x -c nc -zv -t -w 2 10.104.85.156 80'
Aug 17 22:52:23.752: INFO: stderr: "I0817 22:52:23.655978    3981 log.go:172] (0x4000a1a000) (0x40006fc000) Create stream\nI0817 22:52:23.659128    3981 log.go:172] (0x4000a1a000) (0x40006fc000) Stream added, broadcasting: 1\nI0817 22:52:23.670864    3981 log.go:172] (0x4000a1a000) Reply frame received for 1\nI0817 22:52:23.671421    3981 log.go:172] (0x4000a1a000) (0x40007f5c20) Create stream\nI0817 22:52:23.671478    3981 log.go:172] (0x4000a1a000) (0x40007f5c20) Stream added, broadcasting: 3\nI0817 22:52:23.672783    3981 log.go:172] (0x4000a1a000) Reply frame received for 3\nI0817 22:52:23.672993    3981 log.go:172] (0x4000a1a000) (0x40006fc0a0) Create stream\nI0817 22:52:23.673042    3981 log.go:172] (0x4000a1a000) (0x40006fc0a0) Stream added, broadcasting: 5\nI0817 22:52:23.674036    3981 log.go:172] (0x4000a1a000) Reply frame received for 5\nI0817 22:52:23.733652    3981 log.go:172] (0x4000a1a000) Data frame received for 5\nI0817 22:52:23.734123    3981 log.go:172] (0x4000a1a000) Data frame received for 3\nI0817 22:52:23.734366    3981 log.go:172] (0x4000a1a000) Data frame received for 1\nI0817 22:52:23.735008    3981 log.go:172] (0x40006fc000) (1) Data frame handling\nI0817 22:52:23.735458    3981 log.go:172] (0x40006fc000) (1) Data frame sent\nI0817 22:52:23.735553    3981 log.go:172] (0x40007f5c20) (3) Data frame handling\nI0817 22:52:23.736510    3981 log.go:172] (0x40006fc0a0) (5) Data frame handling\nI0817 22:52:23.736623    3981 log.go:172] (0x40006fc0a0) (5) Data frame sent\nI0817 22:52:23.736703    3981 log.go:172] (0x4000a1a000) Data frame received for 5\n+ nc -zv -t -w 2 10.104.85.156 80\nConnection to 10.104.85.156 80 port [tcp/http] succeeded!\nI0817 22:52:23.737478    3981 log.go:172] (0x4000a1a000) (0x40006fc000) Stream removed, broadcasting: 1\nI0817 22:52:23.738184    3981 log.go:172] (0x40006fc0a0) (5) Data frame handling\nI0817 22:52:23.740311    3981 log.go:172] (0x4000a1a000) Go away received\nI0817 22:52:23.742632    3981 log.go:172] (0x4000a1a000) (0x40006fc000) Stream removed, broadcasting: 1\nI0817 22:52:23.743033    3981 log.go:172] (0x4000a1a000) (0x40007f5c20) Stream removed, broadcasting: 3\nI0817 22:52:23.743242    3981 log.go:172] (0x4000a1a000) (0x40006fc0a0) Stream removed, broadcasting: 5\n"
Aug 17 22:52:23.753: INFO: stdout: ""
Aug 17 22:52:23.753: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:52:23.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4805" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:16.621 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":185,"skipped":2800,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:52:23.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 17 22:52:23.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:54:09.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9614" for this suite.

• [SLOW TEST:105.482 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":186,"skipped":2811,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:54:09.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6360.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6360.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 17 22:54:17.773: INFO: DNS probes using dns-test-effa3088-af48-49fd-a7b3-baf39e278dcd succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6360.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6360.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 17 22:54:27.983: INFO: File wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local from pod  dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 17 22:54:27.988: INFO: File jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local from pod  dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 17 22:54:27.988: INFO: Lookups using dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 failed for: [wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local]

Aug 17 22:54:33.069: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local from pod dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6: Get https://172.30.12.66:37695/api/v1/namespaces/dns-6360/pods/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6/proxy/results/wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local: stream error: stream ID 5627; INTERNAL_ERROR
Aug 17 22:54:33.077: INFO: File jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local from pod  dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 contains '' instead of 'bar.example.com.'
Aug 17 22:54:33.077: INFO: Lookups using dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 failed for: [wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local]

Aug 17 22:54:38.002: INFO: File wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local from pod  dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 17 22:54:38.006: INFO: File jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local from pod  dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 17 22:54:38.006: INFO: Lookups using dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 failed for: [wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local]

Aug 17 22:54:42.993: INFO: File wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local from pod  dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 17 22:54:42.997: INFO: File jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local from pod  dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 17 22:54:42.997: INFO: Lookups using dns-6360/dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 failed for: [wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local]

Aug 17 22:54:48.123: INFO: DNS probes using dns-test-ec6da00f-3124-493c-a208-fb7f1fb888c6 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6360.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6360.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6360.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6360.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 17 22:54:58.812: INFO: DNS probes using dns-test-db5fd97c-661b-44d8-bcef-996d7a04b0ab succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:54:59.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6360" for this suite.

• [SLOW TEST:50.564 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":187,"skipped":2812,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:54:59.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:55:00.243: INFO: Creating ReplicaSet my-hostname-basic-7cf135d6-e757-4fd3-b412-c73654f3ca75
Aug 17 22:55:00.265: INFO: Pod name my-hostname-basic-7cf135d6-e757-4fd3-b412-c73654f3ca75: Found 0 pods out of 1
Aug 17 22:55:05.326: INFO: Pod name my-hostname-basic-7cf135d6-e757-4fd3-b412-c73654f3ca75: Found 1 pods out of 1
Aug 17 22:55:05.327: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7cf135d6-e757-4fd3-b412-c73654f3ca75" is running
Aug 17 22:55:07.349: INFO: Pod "my-hostname-basic-7cf135d6-e757-4fd3-b412-c73654f3ca75-h7vf5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 22:55:00 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 22:55:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7cf135d6-e757-4fd3-b412-c73654f3ca75]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 22:55:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7cf135d6-e757-4fd3-b412-c73654f3ca75]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-17 22:55:00 +0000 UTC Reason: Message:}])
Aug 17 22:55:07.350: INFO: Trying to dial the pod
Aug 17 22:55:12.369: INFO: Controller my-hostname-basic-7cf135d6-e757-4fd3-b412-c73654f3ca75: Got expected result from replica 1 [my-hostname-basic-7cf135d6-e757-4fd3-b412-c73654f3ca75-h7vf5]: "my-hostname-basic-7cf135d6-e757-4fd3-b412-c73654f3ca75-h7vf5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:55:12.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6862" for this suite.

• [SLOW TEST:12.516 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":188,"skipped":2851,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:55:12.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Aug 17 22:55:13.110: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 17 22:55:18.132: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:55:18.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6572" for this suite.

• [SLOW TEST:6.149 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":189,"skipped":2870,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:55:18.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:56:18.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1564" for this suite.

• [SLOW TEST:60.113 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":2871,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:56:18.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:56:22.611: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 22:56:25.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 22:56:27.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 22:56:29.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301782, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 22:56:32.047: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:56:32.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1494-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:56:33.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3625" for this suite.
STEP: Destroying namespace "webhook-3625-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.750 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":191,"skipped":2871,"failed":0}
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:56:33.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:56:40.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5724" for this suite.
STEP: Destroying namespace "nsdeletetest-3560" for this suite.
Aug 17 22:56:40.417: INFO: Namespace nsdeletetest-3560 was already deleted
STEP: Destroying namespace "nsdeletetest-4211" for this suite.

• [SLOW TEST:7.021 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":192,"skipped":2871,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:56:40.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-87fq
STEP: Creating a pod to test atomic-volume-subpath
Aug 17 22:56:40.540: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-87fq" in namespace "subpath-3792" to be "success or failure"
Aug 17 22:56:40.549: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.939314ms
Aug 17 22:56:42.556: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015246981s
Aug 17 22:56:44.562: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Running", Reason="", readiness=true. Elapsed: 4.021684597s
Aug 17 22:56:46.569: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Running", Reason="", readiness=true. Elapsed: 6.028146773s
Aug 17 22:56:48.577: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Running", Reason="", readiness=true. Elapsed: 8.036866772s
Aug 17 22:56:50.583: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Running", Reason="", readiness=true. Elapsed: 10.042882381s
Aug 17 22:56:52.590: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Running", Reason="", readiness=true. Elapsed: 12.04947897s
Aug 17 22:56:54.597: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Running", Reason="", readiness=true. Elapsed: 14.056727885s
Aug 17 22:56:56.609: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Running", Reason="", readiness=true. Elapsed: 16.06810719s
Aug 17 22:56:58.616: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Running", Reason="", readiness=true. Elapsed: 18.075351825s
Aug 17 22:57:00.624: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Running", Reason="", readiness=true. Elapsed: 20.083467365s
Aug 17 22:57:02.630: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Running", Reason="", readiness=true. Elapsed: 22.089817051s
Aug 17 22:57:04.638: INFO: Pod "pod-subpath-test-configmap-87fq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.097206773s
STEP: Saw pod success
Aug 17 22:57:04.638: INFO: Pod "pod-subpath-test-configmap-87fq" satisfied condition "success or failure"
Aug 17 22:57:04.643: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-87fq container test-container-subpath-configmap-87fq: 
STEP: delete the pod
Aug 17 22:57:04.680: INFO: Waiting for pod pod-subpath-test-configmap-87fq to disappear
Aug 17 22:57:04.731: INFO: Pod pod-subpath-test-configmap-87fq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-87fq
Aug 17 22:57:04.731: INFO: Deleting pod "pod-subpath-test-configmap-87fq" in namespace "subpath-3792"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:57:04.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3792" for this suite.

• [SLOW TEST:24.328 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":193,"skipped":2879,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:57:04.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 22:57:04.866: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b193d36-0ece-481f-96c2-58a798a1cd1f" in namespace "projected-7744" to be "success or failure"
Aug 17 22:57:04.883: INFO: Pod "downwardapi-volume-6b193d36-0ece-481f-96c2-58a798a1cd1f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.785302ms
Aug 17 22:57:06.890: INFO: Pod "downwardapi-volume-6b193d36-0ece-481f-96c2-58a798a1cd1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024096487s
Aug 17 22:57:08.897: INFO: Pod "downwardapi-volume-6b193d36-0ece-481f-96c2-58a798a1cd1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030391877s
STEP: Saw pod success
Aug 17 22:57:08.897: INFO: Pod "downwardapi-volume-6b193d36-0ece-481f-96c2-58a798a1cd1f" satisfied condition "success or failure"
Aug 17 22:57:08.901: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6b193d36-0ece-481f-96c2-58a798a1cd1f container client-container: 
STEP: delete the pod
Aug 17 22:57:08.948: INFO: Waiting for pod downwardapi-volume-6b193d36-0ece-481f-96c2-58a798a1cd1f to disappear
Aug 17 22:57:08.969: INFO: Pod downwardapi-volume-6b193d36-0ece-481f-96c2-58a798a1cd1f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:57:08.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7744" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":2890,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:57:09.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:57:20.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8629" for this suite.

• [SLOW TEST:11.267 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":195,"skipped":2892,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:57:20.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0817 22:57:30.605041       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 17 22:57:30.605: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:57:30.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9491" for this suite.

• [SLOW TEST:10.340 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":196,"skipped":2945,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:57:30.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-26322e42-c858-4262-a4ae-4760f5b53bee
STEP: Creating a pod to test consume configMaps
Aug 17 22:57:30.767: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-44ee1cdb-1e2a-4e4c-b2c9-8f4b6e515d5b" in namespace "projected-266" to be "success or failure"
Aug 17 22:57:30.793: INFO: Pod "pod-projected-configmaps-44ee1cdb-1e2a-4e4c-b2c9-8f4b6e515d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.863679ms
Aug 17 22:57:32.828: INFO: Pod "pod-projected-configmaps-44ee1cdb-1e2a-4e4c-b2c9-8f4b6e515d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060859006s
Aug 17 22:57:34.843: INFO: Pod "pod-projected-configmaps-44ee1cdb-1e2a-4e4c-b2c9-8f4b6e515d5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075587357s
STEP: Saw pod success
Aug 17 22:57:34.843: INFO: Pod "pod-projected-configmaps-44ee1cdb-1e2a-4e4c-b2c9-8f4b6e515d5b" satisfied condition "success or failure"
Aug 17 22:57:35.068: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-44ee1cdb-1e2a-4e4c-b2c9-8f4b6e515d5b container projected-configmap-volume-test: 
STEP: delete the pod
Aug 17 22:57:35.250: INFO: Waiting for pod pod-projected-configmaps-44ee1cdb-1e2a-4e4c-b2c9-8f4b6e515d5b to disappear
Aug 17 22:57:35.260: INFO: Pod pod-projected-configmaps-44ee1cdb-1e2a-4e4c-b2c9-8f4b6e515d5b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:57:35.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-266" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":2988,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:57:35.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 17 22:57:45.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 17 22:57:45.559: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 17 22:57:47.559: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 17 22:57:47.565: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 17 22:57:49.560: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 17 22:57:49.567: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 17 22:57:51.560: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 17 22:57:51.889: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 17 22:57:53.560: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 17 22:57:53.566: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:57:53.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9362" for this suite.

• [SLOW TEST:18.312 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":2993,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:57:53.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-9101f89b-1925-4061-a348-cf1899358442
STEP: Creating a pod to test consume configMaps
Aug 17 22:57:53.680: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-113ee733-a413-41c3-9753-92bcd7769080" in namespace "projected-5718" to be "success or failure"
Aug 17 22:57:53.726: INFO: Pod "pod-projected-configmaps-113ee733-a413-41c3-9753-92bcd7769080": Phase="Pending", Reason="", readiness=false. Elapsed: 46.621515ms
Aug 17 22:57:55.733: INFO: Pod "pod-projected-configmaps-113ee733-a413-41c3-9753-92bcd7769080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052664936s
Aug 17 22:57:57.737: INFO: Pod "pod-projected-configmaps-113ee733-a413-41c3-9753-92bcd7769080": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057468158s
STEP: Saw pod success
Aug 17 22:57:57.737: INFO: Pod "pod-projected-configmaps-113ee733-a413-41c3-9753-92bcd7769080" satisfied condition "success or failure"
Aug 17 22:57:57.741: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-113ee733-a413-41c3-9753-92bcd7769080 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 17 22:57:57.786: INFO: Waiting for pod pod-projected-configmaps-113ee733-a413-41c3-9753-92bcd7769080 to disappear
Aug 17 22:57:57.809: INFO: Pod pod-projected-configmaps-113ee733-a413-41c3-9753-92bcd7769080 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:57:57.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5718" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3017,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:57:57.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:57:58.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8319" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":200,"skipped":3023,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:57:58.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Aug 17 22:57:58.479: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 17 22:57:58.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5120'
Aug 17 22:58:00.507: INFO: stderr: ""
Aug 17 22:58:00.508: INFO: stdout: "service/agnhost-slave created\n"
Aug 17 22:58:00.509: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 17 22:58:00.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5120'
Aug 17 22:58:03.044: INFO: stderr: ""
Aug 17 22:58:03.044: INFO: stdout: "service/agnhost-master created\n"
Aug 17 22:58:03.045: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 17 22:58:03.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5120'
Aug 17 22:58:05.104: INFO: stderr: ""
Aug 17 22:58:05.105: INFO: stdout: "service/frontend created\n"
Aug 17 22:58:05.106: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 17 22:58:05.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5120'
Aug 17 22:58:06.685: INFO: stderr: ""
Aug 17 22:58:06.685: INFO: stdout: "deployment.apps/frontend created\n"
Aug 17 22:58:06.686: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 17 22:58:06.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5120'
Aug 17 22:58:08.275: INFO: stderr: ""
Aug 17 22:58:08.275: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 17 22:58:08.276: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 17 22:58:08.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5120'
Aug 17 22:58:10.580: INFO: stderr: ""
Aug 17 22:58:10.581: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 17 22:58:10.581: INFO: Waiting for all frontend pods to be Running.
Aug 17 22:58:15.634: INFO: Waiting for frontend to serve content.
Aug 17 22:58:15.649: INFO: Trying to add a new entry to the guestbook.
Aug 17 22:58:15.662: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 17 22:58:15.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5120'
Aug 17 22:58:17.049: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 17 22:58:17.049: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 17 22:58:17.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5120'
Aug 17 22:58:18.403: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 17 22:58:18.403: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 17 22:58:18.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5120'
Aug 17 22:58:19.653: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 17 22:58:19.654: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 17 22:58:19.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5120'
Aug 17 22:58:20.901: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 17 22:58:20.901: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 17 22:58:20.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5120'
Aug 17 22:58:22.609: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 17 22:58:22.610: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 17 22:58:22.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5120'
Aug 17 22:58:23.870: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 17 22:58:23.870: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:58:23.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5120" for this suite.

• [SLOW TEST:25.732 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":201,"skipped":3045,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:58:24.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Aug 17 22:58:24.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 17 22:58:26.479: INFO: stderr: ""
Aug 17 22:58:26.479: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:58:26.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2921" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":202,"skipped":3084,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:58:26.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 17 22:58:27.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5682'
Aug 17 22:58:28.758: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 17 22:58:28.758: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Aug 17 22:58:28.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5682'
Aug 17 22:58:30.028: INFO: stderr: ""
Aug 17 22:58:30.028: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:58:30.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5682" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":203,"skipped":3093,"failed":0}

------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:58:30.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:58:46.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6805" for this suite.

• [SLOW TEST:16.723 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":204,"skipped":3093,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:58:46.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 17 22:58:52.909: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2769 PodName:pod-sharedvolume-9bfe7499-c020-482d-a814-0eadd5503486 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 22:58:52.909: INFO: >>> kubeConfig: /root/.kube/config
I0817 22:58:52.973485       7 log.go:172] (0x4002eb2420) (0x40022f3f40) Create stream
I0817 22:58:52.973675       7 log.go:172] (0x4002eb2420) (0x40022f3f40) Stream added, broadcasting: 1
I0817 22:58:52.977667       7 log.go:172] (0x4002eb2420) Reply frame received for 1
I0817 22:58:52.977890       7 log.go:172] (0x4002eb2420) (0x40013fa0a0) Create stream
I0817 22:58:52.978004       7 log.go:172] (0x4002eb2420) (0x40013fa0a0) Stream added, broadcasting: 3
I0817 22:58:52.979413       7 log.go:172] (0x4002eb2420) Reply frame received for 3
I0817 22:58:52.979572       7 log.go:172] (0x4002eb2420) (0x4000c240a0) Create stream
I0817 22:58:52.979634       7 log.go:172] (0x4002eb2420) (0x4000c240a0) Stream added, broadcasting: 5
I0817 22:58:52.980794       7 log.go:172] (0x4002eb2420) Reply frame received for 5
I0817 22:58:53.071736       7 log.go:172] (0x4002eb2420) Data frame received for 5
I0817 22:58:53.071939       7 log.go:172] (0x4000c240a0) (5) Data frame handling
I0817 22:58:53.072134       7 log.go:172] (0x4002eb2420) Data frame received for 3
I0817 22:58:53.072237       7 log.go:172] (0x40013fa0a0) (3) Data frame handling
I0817 22:58:53.072366       7 log.go:172] (0x40013fa0a0) (3) Data frame sent
I0817 22:58:53.072532       7 log.go:172] (0x4002eb2420) Data frame received for 3
I0817 22:58:53.072625       7 log.go:172] (0x40013fa0a0) (3) Data frame handling
I0817 22:58:53.073562       7 log.go:172] (0x4002eb2420) Data frame received for 1
I0817 22:58:53.073717       7 log.go:172] (0x40022f3f40) (1) Data frame handling
I0817 22:58:53.073840       7 log.go:172] (0x40022f3f40) (1) Data frame sent
I0817 22:58:53.073957       7 log.go:172] (0x4002eb2420) (0x40022f3f40) Stream removed, broadcasting: 1
I0817 22:58:53.074092       7 log.go:172] (0x4002eb2420) Go away received
I0817 22:58:53.074570       7 log.go:172] (0x4002eb2420) (0x40022f3f40) Stream removed, broadcasting: 1
I0817 22:58:53.074814       7 log.go:172] (0x4002eb2420) (0x40013fa0a0) Stream removed, broadcasting: 3
I0817 22:58:53.074943       7 log.go:172] (0x4002eb2420) (0x4000c240a0) Stream removed, broadcasting: 5
Aug 17 22:58:53.075: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:58:53.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2769" for this suite.

• [SLOW TEST:6.310 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":205,"skipped":3095,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:58:53.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:58:53.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-666" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":206,"skipped":3106,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:58:53.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:59:00.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3263" for this suite.

• [SLOW TEST:7.168 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":207,"skipped":3117,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:59:00.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:59:00.688: INFO: Creating deployment "test-recreate-deployment"
Aug 17 22:59:00.704: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 17 22:59:00.789: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 17 22:59:02.803: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 17 22:59:02.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301940, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301940, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301941, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301940, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 22:59:04.814: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 17 22:59:04.829: INFO: Updating deployment test-recreate-deployment
Aug 17 22:59:04.829: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 17 22:59:05.457: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-6875 /apis/apps/v1/namespaces/deployment-6875/deployments/test-recreate-deployment f74cfa36-b6e3-4c6a-adbf-29bb5b7e8c19 895396 2 2020-08-17 22:59:00 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40039e58a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-17 22:59:05 +0000 UTC,LastTransitionTime:2020-08-17 22:59:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-08-17 22:59:05 +0000 UTC,LastTransitionTime:2020-08-17 22:59:00 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 17 22:59:05.573: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-6875 /apis/apps/v1/namespaces/deployment-6875/replicasets/test-recreate-deployment-5f94c574ff 04c92940-acb2-40e2-b617-9dadca3b7d4c 895395 1 2020-08-17 22:59:04 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f74cfa36-b6e3-4c6a-adbf-29bb5b7e8c19 0x40039c6cb7 0x40039c6cb8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40039c6d18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 17 22:59:05.573: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 17 22:59:05.574: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-6875 /apis/apps/v1/namespaces/deployment-6875/replicasets/test-recreate-deployment-799c574856 04b6f3e6-7e2c-47d6-88c8-18f4c61bf3c8 895387 2 2020-08-17 22:59:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f74cfa36-b6e3-4c6a-adbf-29bb5b7e8c19 0x40039c6d87 0x40039c6d88}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40039c6df8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 17 22:59:05.595: INFO: Pod "test-recreate-deployment-5f94c574ff-r7j89" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-r7j89 test-recreate-deployment-5f94c574ff- deployment-6875 /api/v1/namespaces/deployment-6875/pods/test-recreate-deployment-5f94c574ff-r7j89 dc569080-202d-45ee-b4a2-e33ffb37ca44 895399 0 2020-08-17 22:59:04 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 04c92940-acb2-40e2-b617-9dadca3b7d4c 0x40039e5c47 0x40039e5c48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5l89z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5l89z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5l89z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 22:59:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 22:59:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 22:59:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 22:59:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 22:59:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:59:05.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6875" for this suite.

• [SLOW TEST:5.220 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":208,"skipped":3124,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:59:05.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 22:59:05.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 17 22:59:24.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4759 create -f -'
Aug 17 22:59:29.402: INFO: stderr: ""
Aug 17 22:59:29.403: INFO: stdout: "e2e-test-crd-publish-openapi-7084-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 17 22:59:29.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4759 delete e2e-test-crd-publish-openapi-7084-crds test-cr'
Aug 17 22:59:30.633: INFO: stderr: ""
Aug 17 22:59:30.633: INFO: stdout: "e2e-test-crd-publish-openapi-7084-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 17 22:59:30.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4759 apply -f -'
Aug 17 22:59:32.212: INFO: stderr: ""
Aug 17 22:59:32.212: INFO: stdout: "e2e-test-crd-publish-openapi-7084-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 17 22:59:32.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4759 delete e2e-test-crd-publish-openapi-7084-crds test-cr'
Aug 17 22:59:33.449: INFO: stderr: ""
Aug 17 22:59:33.449: INFO: stdout: "e2e-test-crd-publish-openapi-7084-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 17 22:59:33.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7084-crds'
Aug 17 22:59:35.001: INFO: stderr: ""
Aug 17 22:59:35.001: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7084-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 22:59:53.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4759" for this suite.

• [SLOW TEST:48.114 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":209,"skipped":3131,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 22:59:53.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 22:59:57.782: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 22:59:59.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301997, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301997, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301997, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301997, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:00:01.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301997, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301997, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301997, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733301997, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 23:00:04.919: INFO: Waiting for the number of service e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:00:04.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:00:06.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6970" for this suite.
STEP: Destroying namespace "webhook-6970-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.592 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":210,"skipped":3147,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:00:06.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 23:00:06.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d07f33fb-2acc-4a35-a950-bdb5c4666dd6" in namespace "downward-api-904" to be "success or failure"
Aug 17 23:00:06.433: INFO: Pod "downwardapi-volume-d07f33fb-2acc-4a35-a950-bdb5c4666dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 39.448714ms
Aug 17 23:00:08.438: INFO: Pod "downwardapi-volume-d07f33fb-2acc-4a35-a950-bdb5c4666dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044596599s
Aug 17 23:00:10.446: INFO: Pod "downwardapi-volume-d07f33fb-2acc-4a35-a950-bdb5c4666dd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052425123s
STEP: Saw pod success
Aug 17 23:00:10.446: INFO: Pod "downwardapi-volume-d07f33fb-2acc-4a35-a950-bdb5c4666dd6" satisfied condition "success or failure"
Aug 17 23:00:10.451: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d07f33fb-2acc-4a35-a950-bdb5c4666dd6 container client-container: 
STEP: delete the pod
Aug 17 23:00:10.502: INFO: Waiting for pod downwardapi-volume-d07f33fb-2acc-4a35-a950-bdb5c4666dd6 to disappear
Aug 17 23:00:10.535: INFO: Pod downwardapi-volume-d07f33fb-2acc-4a35-a950-bdb5c4666dd6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:00:10.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-904" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3167,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:00:10.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 23:00:12.927: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 23:00:15.001: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302012, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302012, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302012, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302012, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 23:00:18.066: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:00:30.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6673" for this suite.
STEP: Destroying namespace "webhook-6673-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.059 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":212,"skipped":3182,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:00:30.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e039fdfe-4396-4658-8c38-37dec0da7fe6
STEP: Creating a pod to test consume secrets
Aug 17 23:00:31.041: INFO: Waiting up to 5m0s for pod "pod-secrets-a0a818ca-4c58-486e-bdf1-c9910a85c395" in namespace "secrets-6226" to be "success or failure"
Aug 17 23:00:31.071: INFO: Pod "pod-secrets-a0a818ca-4c58-486e-bdf1-c9910a85c395": Phase="Pending", Reason="", readiness=false. Elapsed: 29.463087ms
Aug 17 23:00:33.078: INFO: Pod "pod-secrets-a0a818ca-4c58-486e-bdf1-c9910a85c395": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036445142s
Aug 17 23:00:35.084: INFO: Pod "pod-secrets-a0a818ca-4c58-486e-bdf1-c9910a85c395": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042451415s
Aug 17 23:00:37.091: INFO: Pod "pod-secrets-a0a818ca-4c58-486e-bdf1-c9910a85c395": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049960183s
STEP: Saw pod success
Aug 17 23:00:37.091: INFO: Pod "pod-secrets-a0a818ca-4c58-486e-bdf1-c9910a85c395" satisfied condition "success or failure"
Aug 17 23:00:37.096: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a0a818ca-4c58-486e-bdf1-c9910a85c395 container secret-volume-test: 
STEP: delete the pod
Aug 17 23:00:37.145: INFO: Waiting for pod pod-secrets-a0a818ca-4c58-486e-bdf1-c9910a85c395 to disappear
Aug 17 23:00:37.158: INFO: Pod pod-secrets-a0a818ca-4c58-486e-bdf1-c9910a85c395 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:00:37.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6226" for this suite.
STEP: Destroying namespace "secret-namespace-1064" for this suite.

• [SLOW TEST:6.567 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3184,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:00:37.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 17 23:00:37.293: INFO: Waiting up to 5m0s for pod "pod-bbd89fc6-60b7-459e-9b91-0123e52d0c61" in namespace "emptydir-5271" to be "success or failure"
Aug 17 23:00:37.296: INFO: Pod "pod-bbd89fc6-60b7-459e-9b91-0123e52d0c61": Phase="Pending", Reason="", readiness=false. Elapsed: 3.434865ms
Aug 17 23:00:39.302: INFO: Pod "pod-bbd89fc6-60b7-459e-9b91-0123e52d0c61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009787485s
Aug 17 23:00:41.416: INFO: Pod "pod-bbd89fc6-60b7-459e-9b91-0123e52d0c61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123233975s
STEP: Saw pod success
Aug 17 23:00:41.416: INFO: Pod "pod-bbd89fc6-60b7-459e-9b91-0123e52d0c61" satisfied condition "success or failure"
Aug 17 23:00:41.512: INFO: Trying to get logs from node jerma-worker2 pod pod-bbd89fc6-60b7-459e-9b91-0123e52d0c61 container test-container: 
STEP: delete the pod
Aug 17 23:00:41.535: INFO: Waiting for pod pod-bbd89fc6-60b7-459e-9b91-0123e52d0c61 to disappear
Aug 17 23:00:41.542: INFO: Pod pod-bbd89fc6-60b7-459e-9b91-0123e52d0c61 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:00:41.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5271" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3185,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:00:41.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-b4a48b1e-bfa7-414c-8bd1-12f3eaae6306
STEP: Creating a pod to test consume secrets
Aug 17 23:00:41.697: INFO: Waiting up to 5m0s for pod "pod-secrets-824954cb-cba0-48a1-af92-56594afd9792" in namespace "secrets-2730" to be "success or failure"
Aug 17 23:00:41.721: INFO: Pod "pod-secrets-824954cb-cba0-48a1-af92-56594afd9792": Phase="Pending", Reason="", readiness=false. Elapsed: 23.446672ms
Aug 17 23:00:43.727: INFO: Pod "pod-secrets-824954cb-cba0-48a1-af92-56594afd9792": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029968984s
Aug 17 23:00:45.824: INFO: Pod "pod-secrets-824954cb-cba0-48a1-af92-56594afd9792": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126986811s
Aug 17 23:00:47.973: INFO: Pod "pod-secrets-824954cb-cba0-48a1-af92-56594afd9792": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.27535281s
STEP: Saw pod success
Aug 17 23:00:47.973: INFO: Pod "pod-secrets-824954cb-cba0-48a1-af92-56594afd9792" satisfied condition "success or failure"
Aug 17 23:00:48.483: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-824954cb-cba0-48a1-af92-56594afd9792 container secret-volume-test: 
STEP: delete the pod
Aug 17 23:00:49.347: INFO: Waiting for pod pod-secrets-824954cb-cba0-48a1-af92-56594afd9792 to disappear
Aug 17 23:00:49.408: INFO: Pod pod-secrets-824954cb-cba0-48a1-af92-56594afd9792 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:00:49.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2730" for this suite.

• [SLOW TEST:8.547 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3187,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:00:50.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-2812
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2812 to expose endpoints map[]
Aug 17 23:00:52.153: INFO: successfully validated that service multi-endpoint-test in namespace services-2812 exposes endpoints map[] (575.309226ms elapsed)
STEP: Creating pod pod1 in namespace services-2812
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2812 to expose endpoints map[pod1:[100]]
Aug 17 23:00:59.755: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (6.892383921s elapsed, will retry)
Aug 17 23:01:01.824: INFO: successfully validated that service multi-endpoint-test in namespace services-2812 exposes endpoints map[pod1:[100]] (8.961446831s elapsed)
STEP: Creating pod pod2 in namespace services-2812
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2812 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 17 23:01:06.444: INFO: successfully validated that service multi-endpoint-test in namespace services-2812 exposes endpoints map[pod1:[100] pod2:[101]] (4.439722461s elapsed)
STEP: Deleting pod pod1 in namespace services-2812
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2812 to expose endpoints map[pod2:[101]]
Aug 17 23:01:06.496: INFO: successfully validated that service multi-endpoint-test in namespace services-2812 exposes endpoints map[pod2:[101]] (43.864298ms elapsed)
STEP: Deleting pod pod2 in namespace services-2812
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2812 to expose endpoints map[]
Aug 17 23:01:06.607: INFO: successfully validated that service multi-endpoint-test in namespace services-2812 exposes endpoints map[] (63.967628ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:01:07.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2812" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.538 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":216,"skipped":3193,"failed":0}
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:01:07.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:01:25.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4182" for this suite.

• [SLOW TEST:17.634 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":217,"skipped":3193,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:01:25.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 17 23:01:31.878: INFO: Successfully updated pod "pod-update-24edbca8-d998-4aeb-8ce4-005ecc009633"
STEP: verifying the updated pod is in kubernetes
Aug 17 23:01:31.894: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:01:31.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-700" for this suite.

• [SLOW TEST:6.628 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:01:31.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 17 23:01:32.014: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 17 23:01:32.039: INFO: Waiting for terminating namespaces to be deleted...
Aug 17 23:01:32.043: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 17 23:01:32.057: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 23:01:32.057: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 17 23:01:32.057: INFO: pod-update-24edbca8-d998-4aeb-8ce4-005ecc009633 from pods-700 started at 2020-08-17 23:01:25 +0000 UTC (1 container statuses recorded)
Aug 17 23:01:32.057: INFO: 	Container nginx ready: true, restart count 0
Aug 17 23:01:32.057: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 23:01:32.057: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 17 23:01:32.058: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 17 23:01:32.070: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 23:01:32.070: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 17 23:01:32.070: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 17 23:01:32.070: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-bd27e3e2-36f4-4de2-a0d3-8d64ac108b4f 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-bd27e3e2-36f4-4de2-a0d3-8d64ac108b4f off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-bd27e3e2-36f4-4de2-a0d3-8d64ac108b4f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:01:48.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2659" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.411 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":219,"skipped":3261,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:01:48.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 17 23:01:48.384: INFO: Waiting up to 5m0s for pod "pod-70ec5ef1-e523-4eba-9000-d2a2271014df" in namespace "emptydir-4065" to be "success or failure"
Aug 17 23:01:48.429: INFO: Pod "pod-70ec5ef1-e523-4eba-9000-d2a2271014df": Phase="Pending", Reason="", readiness=false. Elapsed: 44.624575ms
Aug 17 23:01:50.434: INFO: Pod "pod-70ec5ef1-e523-4eba-9000-d2a2271014df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050181038s
Aug 17 23:01:52.439: INFO: Pod "pod-70ec5ef1-e523-4eba-9000-d2a2271014df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055560503s
STEP: Saw pod success
Aug 17 23:01:52.440: INFO: Pod "pod-70ec5ef1-e523-4eba-9000-d2a2271014df" satisfied condition "success or failure"
Aug 17 23:01:52.443: INFO: Trying to get logs from node jerma-worker2 pod pod-70ec5ef1-e523-4eba-9000-d2a2271014df container test-container: 
STEP: delete the pod
Aug 17 23:01:52.609: INFO: Waiting for pod pod-70ec5ef1-e523-4eba-9000-d2a2271014df to disappear
Aug 17 23:01:52.675: INFO: Pod pod-70ec5ef1-e523-4eba-9000-d2a2271014df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:01:52.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4065" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3353,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:01:52.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2846
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-2846
Aug 17 23:01:52.830: INFO: Found 0 stateful pods, waiting for 1
Aug 17 23:02:02.931: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 17 23:02:02.955: INFO: Deleting all statefulset in ns statefulset-2846
Aug 17 23:02:02.962: INFO: Scaling statefulset ss to 0
Aug 17 23:02:23.227: INFO: Waiting for statefulset status.replicas updated to 0
Aug 17 23:02:23.233: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:02:23.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2846" for this suite.

• [SLOW TEST:30.639 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":221,"skipped":3363,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:02:23.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 23:02:26.555: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 23:02:28.703: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302146, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302146, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302146, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302146, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:02:30.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302146, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302146, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302146, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302146, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 23:02:33.758: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 17 23:02:33.789: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:02:34.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3800" for this suite.
STEP: Destroying namespace "webhook-3800-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.518 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":222,"skipped":3363,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:02:34.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 17 23:02:35.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8475'
Aug 17 23:02:37.187: INFO: stderr: ""
Aug 17 23:02:37.187: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 17 23:02:37.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8475'
Aug 17 23:02:38.464: INFO: stderr: ""
Aug 17 23:02:38.464: INFO: stdout: "update-demo-nautilus-jq8zj update-demo-nautilus-r9dbm "
Aug 17 23:02:38.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jq8zj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8475'
Aug 17 23:02:40.026: INFO: stderr: ""
Aug 17 23:02:40.026: INFO: stdout: ""
Aug 17 23:02:40.026: INFO: update-demo-nautilus-jq8zj is created but not running
Aug 17 23:02:45.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8475'
Aug 17 23:02:46.298: INFO: stderr: ""
Aug 17 23:02:46.298: INFO: stdout: "update-demo-nautilus-jq8zj update-demo-nautilus-r9dbm "
Aug 17 23:02:46.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jq8zj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8475'
Aug 17 23:02:47.538: INFO: stderr: ""
Aug 17 23:02:47.538: INFO: stdout: "true"
Aug 17 23:02:47.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jq8zj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8475'
Aug 17 23:02:48.812: INFO: stderr: ""
Aug 17 23:02:48.812: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 17 23:02:48.813: INFO: validating pod update-demo-nautilus-jq8zj
Aug 17 23:02:49.102: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 17 23:02:49.102: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 17 23:02:49.102: INFO: update-demo-nautilus-jq8zj is verified up and running
Aug 17 23:02:49.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9dbm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8475'
Aug 17 23:02:50.388: INFO: stderr: ""
Aug 17 23:02:50.388: INFO: stdout: "true"
Aug 17 23:02:50.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9dbm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8475'
Aug 17 23:02:51.680: INFO: stderr: ""
Aug 17 23:02:51.680: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 17 23:02:51.680: INFO: validating pod update-demo-nautilus-r9dbm
Aug 17 23:02:51.686: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 17 23:02:51.686: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 17 23:02:51.686: INFO: update-demo-nautilus-r9dbm is verified up and running
STEP: using delete to clean up resources
Aug 17 23:02:51.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8475'
Aug 17 23:02:52.927: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 17 23:02:52.927: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 17 23:02:52.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8475'
Aug 17 23:02:54.230: INFO: stderr: "No resources found in kubectl-8475 namespace.\n"
Aug 17 23:02:54.230: INFO: stdout: ""
Aug 17 23:02:54.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8475 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 17 23:02:55.639: INFO: stderr: ""
Aug 17 23:02:55.639: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:02:55.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8475" for this suite.

• [SLOW TEST:21.064 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":223,"skipped":3393,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:02:55.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 17 23:03:04.455: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 17 23:03:04.461: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 17 23:03:06.461: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 17 23:03:06.468: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 17 23:03:08.461: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 17 23:03:08.667: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 17 23:03:10.461: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 17 23:03:10.467: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 17 23:03:12.461: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 17 23:03:12.467: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:03:12.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4897" for this suite.

• [SLOW TEST:16.561 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3418,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:03:12.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:03:12.797: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 17 23:03:12.820: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 17 23:03:17.957: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 17 23:03:20.106: INFO: Creating deployment "test-rolling-update-deployment"
Aug 17 23:03:20.113: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 17 23:03:20.181: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 17 23:03:22.520: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Aug 17 23:03:22.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302200, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302200, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302200, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302200, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:03:24.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302200, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302200, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302203, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302200, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:03:26.532: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 17 23:03:26.547: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-533 /apis/apps/v1/namespaces/deployment-533/deployments/test-rolling-update-deployment dbcc8719-e03c-43a0-9f58-096a78584169 896954 1 2020-08-17 23:03:20 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400592d7f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-17 23:03:20 +0000 UTC,LastTransitionTime:2020-08-17 23:03:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-08-17 23:03:24 +0000 UTC,LastTransitionTime:2020-08-17 23:03:20 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 17 23:03:26.553: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-533 /apis/apps/v1/namespaces/deployment-533/replicasets/test-rolling-update-deployment-67cf4f6444 5e73eb4f-de51-4f6a-950d-3f105614eda6 896942 1 2020-08-17 23:03:20 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment dbcc8719-e03c-43a0-9f58-096a78584169 0x400592dc97 0x400592dc98}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400592dd18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 17 23:03:26.553: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 17 23:03:26.554: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-533 /apis/apps/v1/namespaces/deployment-533/replicasets/test-rolling-update-controller ee3bfc01-8338-4262-b942-28400d8ec55d 896952 2 2020-08-17 23:03:12 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment dbcc8719-e03c-43a0-9f58-096a78584169 0x400592dbb7 0x400592dbb8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x400592dc28  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 17 23:03:26.560: INFO: Pod "test-rolling-update-deployment-67cf4f6444-6w7mw" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-6w7mw test-rolling-update-deployment-67cf4f6444- deployment-533 /api/v1/namespaces/deployment-533/pods/test-rolling-update-deployment-67cf4f6444-6w7mw a9259e8a-af46-4723-808d-314ea9b0f899 896941 0 2020-08-17 23:03:20 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 5e73eb4f-de51-4f6a-950d-3f105614eda6 0x40068fe197 0x40068fe198}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ml84s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ml84s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ml84s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:03:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:03:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.118,StartTime:2020-08-17 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:03:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://ce72a149bb5f41e62921c5ab4495f72698cfab7ff7823d6d5bba890a2d9a37ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:03:26.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-533" for this suite.

• [SLOW TEST:14.091 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":225,"skipped":3462,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:03:26.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-11c9e0fe-e2b5-432a-9388-a426b957c4b6
STEP: Creating a pod to test consume secrets
Aug 17 23:03:26.796: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da68634c-07b8-427c-8cf4-678912511645" in namespace "projected-4269" to be "success or failure"
Aug 17 23:03:26.861: INFO: Pod "pod-projected-secrets-da68634c-07b8-427c-8cf4-678912511645": Phase="Pending", Reason="", readiness=false. Elapsed: 64.737612ms
Aug 17 23:03:28.868: INFO: Pod "pod-projected-secrets-da68634c-07b8-427c-8cf4-678912511645": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071967783s
Aug 17 23:03:30.875: INFO: Pod "pod-projected-secrets-da68634c-07b8-427c-8cf4-678912511645": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078629577s
Aug 17 23:03:32.881: INFO: Pod "pod-projected-secrets-da68634c-07b8-427c-8cf4-678912511645": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085323994s
STEP: Saw pod success
Aug 17 23:03:32.882: INFO: Pod "pod-projected-secrets-da68634c-07b8-427c-8cf4-678912511645" satisfied condition "success or failure"
Aug 17 23:03:32.887: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-da68634c-07b8-427c-8cf4-678912511645 container secret-volume-test: 
STEP: delete the pod
Aug 17 23:03:32.939: INFO: Waiting for pod pod-projected-secrets-da68634c-07b8-427c-8cf4-678912511645 to disappear
Aug 17 23:03:32.946: INFO: Pod pod-projected-secrets-da68634c-07b8-427c-8cf4-678912511645 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:03:32.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4269" for this suite.

• [SLOW TEST:6.386 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3486,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:03:32.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-4210
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4210
STEP: Deleting pre-stop pod
Aug 17 23:03:46.149: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:03:46.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4210" for this suite.

• [SLOW TEST:13.209 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":227,"skipped":3552,"failed":0}
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:03:46.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-946643d2-8e9f-4e88-b43d-58293857d6a3
STEP: Creating a pod to test consume secrets
Aug 17 23:03:47.363: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b8b28c1-593a-4624-af6a-41d4f43d2c59" in namespace "projected-9153" to be "success or failure"
Aug 17 23:03:47.420: INFO: Pod "pod-projected-secrets-7b8b28c1-593a-4624-af6a-41d4f43d2c59": Phase="Pending", Reason="", readiness=false. Elapsed: 57.272774ms
Aug 17 23:03:49.495: INFO: Pod "pod-projected-secrets-7b8b28c1-593a-4624-af6a-41d4f43d2c59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131894951s
Aug 17 23:03:51.736: INFO: Pod "pod-projected-secrets-7b8b28c1-593a-4624-af6a-41d4f43d2c59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372680427s
Aug 17 23:03:53.742: INFO: Pod "pod-projected-secrets-7b8b28c1-593a-4624-af6a-41d4f43d2c59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.379286985s
STEP: Saw pod success
Aug 17 23:03:53.743: INFO: Pod "pod-projected-secrets-7b8b28c1-593a-4624-af6a-41d4f43d2c59" satisfied condition "success or failure"
Aug 17 23:03:53.791: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-7b8b28c1-593a-4624-af6a-41d4f43d2c59 container projected-secret-volume-test: 
STEP: delete the pod
Aug 17 23:03:53.808: INFO: Waiting for pod pod-projected-secrets-7b8b28c1-593a-4624-af6a-41d4f43d2c59 to disappear
Aug 17 23:03:53.813: INFO: Pod pod-projected-secrets-7b8b28c1-593a-4624-af6a-41d4f43d2c59 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:03:53.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9153" for this suite.

• [SLOW TEST:7.644 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3552,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:03:53.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7555
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7555
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-7555
Aug 17 23:03:53.946: INFO: Found 0 stateful pods, waiting for 1
Aug 17 23:04:03.954: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale-up will halt with an unhealthy stateful pod
Aug 17 23:04:03.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 17 23:04:05.411: INFO: stderr: "I0817 23:04:05.284044    4728 log.go:172] (0x4000126f20) (0x4000a50000) Create stream\nI0817 23:04:05.289673    4728 log.go:172] (0x4000126f20) (0x4000a50000) Stream added, broadcasting: 1\nI0817 23:04:05.300391    4728 log.go:172] (0x4000126f20) Reply frame received for 1\nI0817 23:04:05.300988    4728 log.go:172] (0x4000126f20) (0x4000a500a0) Create stream\nI0817 23:04:05.301046    4728 log.go:172] (0x4000126f20) (0x4000a500a0) Stream added, broadcasting: 3\nI0817 23:04:05.302524    4728 log.go:172] (0x4000126f20) Reply frame received for 3\nI0817 23:04:05.302942    4728 log.go:172] (0x4000126f20) (0x4000a50140) Create stream\nI0817 23:04:05.303032    4728 log.go:172] (0x4000126f20) (0x4000a50140) Stream added, broadcasting: 5\nI0817 23:04:05.304463    4728 log.go:172] (0x4000126f20) Reply frame received for 5\nI0817 23:04:05.362001    4728 log.go:172] (0x4000126f20) Data frame received for 5\nI0817 23:04:05.362285    4728 log.go:172] (0x4000a50140) (5) Data frame handling\nI0817 23:04:05.362916    4728 log.go:172] (0x4000a50140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 23:04:05.386569    4728 log.go:172] (0x4000126f20) Data frame received for 5\nI0817 23:04:05.386680    4728 log.go:172] (0x4000a50140) (5) Data frame handling\nI0817 23:04:05.386832    4728 log.go:172] (0x4000126f20) Data frame received for 3\nI0817 23:04:05.386966    4728 log.go:172] (0x4000a500a0) (3) Data frame handling\nI0817 23:04:05.387092    4728 log.go:172] (0x4000a500a0) (3) Data frame sent\nI0817 23:04:05.387206    4728 log.go:172] (0x4000126f20) Data frame received for 3\nI0817 23:04:05.387298    4728 log.go:172] (0x4000a500a0) (3) Data frame handling\nI0817 23:04:05.388835    4728 log.go:172] (0x4000126f20) Data frame received for 1\nI0817 23:04:05.389012    4728 log.go:172] (0x4000a50000) (1) Data frame handling\nI0817 23:04:05.389168    4728 log.go:172] (0x4000a50000) (1) Data frame sent\nI0817 23:04:05.390706    4728 log.go:172] (0x4000126f20) (0x4000a50000) Stream removed, broadcasting: 1\nI0817 23:04:05.394561    4728 log.go:172] (0x4000126f20) Go away received\nI0817 23:04:05.398343    4728 log.go:172] (0x4000126f20) (0x4000a50000) Stream removed, broadcasting: 1\nI0817 23:04:05.398692    4728 log.go:172] (0x4000126f20) (0x4000a500a0) Stream removed, broadcasting: 3\nI0817 23:04:05.398935    4728 log.go:172] (0x4000126f20) (0x4000a50140) Stream removed, broadcasting: 5\n"
Aug 17 23:04:05.412: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 17 23:04:05.412: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 17 23:04:05.418: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 17 23:04:15.427: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 17 23:04:15.428: INFO: Waiting for statefulset status.replicas updated to 0
Aug 17 23:04:15.450: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999995584s
Aug 17 23:04:16.526: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992059192s
Aug 17 23:04:17.535: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.915781333s
Aug 17 23:04:18.546: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.907273888s
Aug 17 23:04:19.553: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.895863367s
Aug 17 23:04:20.561: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.888733753s
Aug 17 23:04:21.641: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.881410703s
Aug 17 23:04:22.649: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.801058175s
Aug 17 23:04:23.657: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.792867561s
Aug 17 23:04:24.663: INFO: Verifying statefulset ss doesn't scale past 1 for another 784.673166ms
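
The halt just verified follows from two ingredients: the stateful set uses the default OrderedReady pod management policy, so the controller will not create ss-1 until ss-0 is Ready, and the kubectl exec above broke ss-0's readiness by moving index.html out of the Apache htdocs directory its readiness probe fetches. A minimal Go sketch of a stateful set with that shape; the container name webserver and the httpd image appear later in this log and elsewhere in this run, but the probe details and service name are illustrative assumptions.

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // orderedStatefulSet sketches a stateful set like ss: with OrderedReady
    // pod management the controller creates and deletes pods one at a time,
    // waiting for readiness in between, which is the halt this spec checks.
    func orderedStatefulSet() *appsv1.StatefulSet {
        replicas := int32(1)
        labels := map[string]string{"baz": "blah", "foo": "bar"} // selector from the watcher above
        return &appsv1.StatefulSet{
            ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-7555"},
            Spec: appsv1.StatefulSetSpec{
                Replicas:            &replicas,
                ServiceName:         "test", // headless service created in BeforeEach
                PodManagementPolicy: appsv1.OrderedReadyPodManagement,
                Selector:            &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "webserver", // container name from the exec error below
                            Image: "docker.io/library/httpd:2.4.38-alpine", // assumed image
                            ReadinessProbe: &corev1.Probe{
                                Handler: corev1.Handler{ // corev1.Handler in the v1.17 API
                                    HTTPGet: &corev1.HTTPGetAction{
                                        Path: "/index.html", // fails once index.html is mv'd to /tmp
                                        Port: intstr.FromInt(80),
                                    },
                                },
                            },
                        }},
                    },
                },
            },
        }
    }

With PodManagementPolicy set to Parallel instead, the controller would not wait on readiness between pods, and the ordering and halting assertions in this spec would not hold.
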
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7555
Aug 17 23:04:25.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:04:27.163: INFO: stderr: "I0817 23:04:27.074188    4752 log.go:172] (0x4000a50c60) (0x40006e81e0) Create stream\nI0817 23:04:27.077774    4752 log.go:172] (0x4000a50c60) (0x40006e81e0) Stream added, broadcasting: 1\nI0817 23:04:27.090383    4752 log.go:172] (0x4000a50c60) Reply frame received for 1\nI0817 23:04:27.091205    4752 log.go:172] (0x4000a50c60) (0x40007ee000) Create stream\nI0817 23:04:27.091282    4752 log.go:172] (0x4000a50c60) (0x40007ee000) Stream added, broadcasting: 3\nI0817 23:04:27.093455    4752 log.go:172] (0x4000a50c60) Reply frame received for 3\nI0817 23:04:27.094053    4752 log.go:172] (0x4000a50c60) (0x40006e8280) Create stream\nI0817 23:04:27.094166    4752 log.go:172] (0x4000a50c60) (0x40006e8280) Stream added, broadcasting: 5\nI0817 23:04:27.095983    4752 log.go:172] (0x4000a50c60) Reply frame received for 5\nI0817 23:04:27.143535    4752 log.go:172] (0x4000a50c60) Data frame received for 3\nI0817 23:04:27.143919    4752 log.go:172] (0x40007ee000) (3) Data frame handling\nI0817 23:04:27.144290    4752 log.go:172] (0x4000a50c60) Data frame received for 5\nI0817 23:04:27.144498    4752 log.go:172] (0x40006e8280) (5) Data frame handling\nI0817 23:04:27.144600    4752 log.go:172] (0x4000a50c60) Data frame received for 1\nI0817 23:04:27.144683    4752 log.go:172] (0x40007ee000) (3) Data frame sent\nI0817 23:04:27.144931    4752 log.go:172] (0x40006e81e0) (1) Data frame handling\nI0817 23:04:27.145031    4752 log.go:172] (0x40006e81e0) (1) Data frame sent\nI0817 23:04:27.145103    4752 log.go:172] (0x4000a50c60) Data frame received for 3\nI0817 23:04:27.145167    4752 log.go:172] (0x40007ee000) (3) Data frame handling\nI0817 23:04:27.145548    4752 log.go:172] (0x40006e8280) (5) Data frame sent\nI0817 23:04:27.145613    4752 log.go:172] (0x4000a50c60) Data frame received for 5\nI0817 23:04:27.145659    4752 log.go:172] (0x40006e8280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 23:04:27.147765    4752 log.go:172] (0x4000a50c60) (0x40006e81e0) Stream removed, broadcasting: 1\nI0817 23:04:27.150165    4752 log.go:172] (0x4000a50c60) Go away received\nI0817 23:04:27.153371    4752 log.go:172] (0x4000a50c60) (0x40006e81e0) Stream removed, broadcasting: 1\nI0817 23:04:27.154297    4752 log.go:172] (0x4000a50c60) (0x40007ee000) Stream removed, broadcasting: 3\nI0817 23:04:27.154529    4752 log.go:172] (0x4000a50c60) (0x40006e8280) Stream removed, broadcasting: 5\n"
Aug 17 23:04:27.164: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 17 23:04:27.165: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 17 23:04:27.170: INFO: Found 1 stateful pods, waiting for 3
Aug 17 23:04:37.524: INFO: Found 2 stateful pods, waiting for 3
Aug 17 23:04:47.181: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 23:04:47.181: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 17 23:04:47.181: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale-down will halt with an unhealthy stateful pod
Aug 17 23:04:47.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 17 23:04:48.646: INFO: stderr: "I0817 23:04:48.518467    4775 log.go:172] (0x4000984000) (0x40009a4000) Create stream\nI0817 23:04:48.524111    4775 log.go:172] (0x4000984000) (0x40009a4000) Stream added, broadcasting: 1\nI0817 23:04:48.535234    4775 log.go:172] (0x4000984000) Reply frame received for 1\nI0817 23:04:48.535783    4775 log.go:172] (0x4000984000) (0x4000b0a000) Create stream\nI0817 23:04:48.535840    4775 log.go:172] (0x4000984000) (0x4000b0a000) Stream added, broadcasting: 3\nI0817 23:04:48.537265    4775 log.go:172] (0x4000984000) Reply frame received for 3\nI0817 23:04:48.537647    4775 log.go:172] (0x4000984000) (0x40004fd5e0) Create stream\nI0817 23:04:48.537743    4775 log.go:172] (0x4000984000) (0x40004fd5e0) Stream added, broadcasting: 5\nI0817 23:04:48.539747    4775 log.go:172] (0x4000984000) Reply frame received for 5\nI0817 23:04:48.625200    4775 log.go:172] (0x4000984000) Data frame received for 5\nI0817 23:04:48.625432    4775 log.go:172] (0x4000984000) Data frame received for 3\nI0817 23:04:48.625628    4775 log.go:172] (0x4000b0a000) (3) Data frame handling\nI0817 23:04:48.625822    4775 log.go:172] (0x40004fd5e0) (5) Data frame handling\nI0817 23:04:48.627052    4775 log.go:172] (0x4000984000) Data frame received for 1\nI0817 23:04:48.627186    4775 log.go:172] (0x40009a4000) (1) Data frame handling\nI0817 23:04:48.627524    4775 log.go:172] (0x4000b0a000) (3) Data frame sent\nI0817 23:04:48.627869    4775 log.go:172] (0x4000984000) Data frame received for 3\nI0817 23:04:48.627975    4775 log.go:172] (0x4000b0a000) (3) Data frame handling\nI0817 23:04:48.628676    4775 log.go:172] (0x40009a4000) (1) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 23:04:48.629409    4775 log.go:172] (0x40004fd5e0) (5) Data frame sent\nI0817 23:04:48.629520    4775 log.go:172] (0x4000984000) Data frame received for 5\nI0817 23:04:48.629586    4775 log.go:172] (0x40004fd5e0) (5) Data frame handling\nI0817 23:04:48.631671    4775 log.go:172] (0x4000984000) (0x40009a4000) Stream removed, broadcasting: 1\nI0817 23:04:48.633833    4775 log.go:172] (0x4000984000) Go away received\nI0817 23:04:48.637106    4775 log.go:172] (0x4000984000) (0x40009a4000) Stream removed, broadcasting: 1\nI0817 23:04:48.638157    4775 log.go:172] (0x4000984000) (0x4000b0a000) Stream removed, broadcasting: 3\nI0817 23:04:48.638373    4775 log.go:172] (0x4000984000) (0x40004fd5e0) Stream removed, broadcasting: 5\n"
Aug 17 23:04:48.647: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 17 23:04:48.648: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 17 23:04:48.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 17 23:04:50.201: INFO: stderr: "I0817 23:04:50.019610    4800 log.go:172] (0x40001222c0) (0x40006586e0) Create stream\nI0817 23:04:50.023154    4800 log.go:172] (0x40001222c0) (0x40006586e0) Stream added, broadcasting: 1\nI0817 23:04:50.036652    4800 log.go:172] (0x40001222c0) Reply frame received for 1\nI0817 23:04:50.037918    4800 log.go:172] (0x40001222c0) (0x400078fe00) Create stream\nI0817 23:04:50.038035    4800 log.go:172] (0x40001222c0) (0x400078fe00) Stream added, broadcasting: 3\nI0817 23:04:50.039888    4800 log.go:172] (0x40001222c0) Reply frame received for 3\nI0817 23:04:50.040382    4800 log.go:172] (0x40001222c0) (0x40009ee000) Create stream\nI0817 23:04:50.040508    4800 log.go:172] (0x40001222c0) (0x40009ee000) Stream added, broadcasting: 5\nI0817 23:04:50.042158    4800 log.go:172] (0x40001222c0) Reply frame received for 5\nI0817 23:04:50.125552    4800 log.go:172] (0x40001222c0) Data frame received for 5\nI0817 23:04:50.125816    4800 log.go:172] (0x40009ee000) (5) Data frame handling\nI0817 23:04:50.126376    4800 log.go:172] (0x40009ee000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 23:04:50.184690    4800 log.go:172] (0x40001222c0) Data frame received for 5\nI0817 23:04:50.185268    4800 log.go:172] (0x40009ee000) (5) Data frame handling\nI0817 23:04:50.185543    4800 log.go:172] (0x40001222c0) Data frame received for 3\nI0817 23:04:50.186136    4800 log.go:172] (0x40001222c0) Data frame received for 1\nI0817 23:04:50.186249    4800 log.go:172] (0x40006586e0) (1) Data frame handling\nI0817 23:04:50.186378    4800 log.go:172] (0x400078fe00) (3) Data frame handling\nI0817 23:04:50.186513    4800 log.go:172] (0x400078fe00) (3) Data frame sent\nI0817 23:04:50.186659    4800 log.go:172] (0x40001222c0) Data frame received for 3\nI0817 23:04:50.186816    4800 log.go:172] (0x40006586e0) (1) Data frame sent\nI0817 23:04:50.186977    4800 log.go:172] (0x400078fe00) (3) Data frame handling\nI0817 23:04:50.187610    4800 log.go:172] (0x40001222c0) (0x40006586e0) Stream removed, broadcasting: 1\nI0817 23:04:50.191095    4800 log.go:172] (0x40001222c0) Go away received\nI0817 23:04:50.193802    4800 log.go:172] (0x40001222c0) (0x40006586e0) Stream removed, broadcasting: 1\nI0817 23:04:50.194024    4800 log.go:172] (0x40001222c0) (0x400078fe00) Stream removed, broadcasting: 3\nI0817 23:04:50.194173    4800 log.go:172] (0x40001222c0) (0x40009ee000) Stream removed, broadcasting: 5\n"
Aug 17 23:04:50.202: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 17 23:04:50.202: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 17 23:04:50.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 17 23:04:51.709: INFO: stderr: "I0817 23:04:51.541236    4824 log.go:172] (0x4000104dc0) (0x40006e3c20) Create stream\nI0817 23:04:51.545940    4824 log.go:172] (0x4000104dc0) (0x40006e3c20) Stream added, broadcasting: 1\nI0817 23:04:51.561553    4824 log.go:172] (0x4000104dc0) Reply frame received for 1\nI0817 23:04:51.562470    4824 log.go:172] (0x4000104dc0) (0x40008e0000) Create stream\nI0817 23:04:51.562552    4824 log.go:172] (0x4000104dc0) (0x40008e0000) Stream added, broadcasting: 3\nI0817 23:04:51.564571    4824 log.go:172] (0x4000104dc0) Reply frame received for 3\nI0817 23:04:51.565160    4824 log.go:172] (0x4000104dc0) (0x40008e00a0) Create stream\nI0817 23:04:51.565265    4824 log.go:172] (0x4000104dc0) (0x40008e00a0) Stream added, broadcasting: 5\nI0817 23:04:51.566573    4824 log.go:172] (0x4000104dc0) Reply frame received for 5\nI0817 23:04:51.629845    4824 log.go:172] (0x4000104dc0) Data frame received for 5\nI0817 23:04:51.630070    4824 log.go:172] (0x40008e00a0) (5) Data frame handling\nI0817 23:04:51.630577    4824 log.go:172] (0x40008e00a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0817 23:04:51.684635    4824 log.go:172] (0x4000104dc0) Data frame received for 3\nI0817 23:04:51.684908    4824 log.go:172] (0x40008e0000) (3) Data frame handling\nI0817 23:04:51.685012    4824 log.go:172] (0x40008e0000) (3) Data frame sent\nI0817 23:04:51.685095    4824 log.go:172] (0x4000104dc0) Data frame received for 3\nI0817 23:04:51.685162    4824 log.go:172] (0x40008e0000) (3) Data frame handling\nI0817 23:04:51.686115    4824 log.go:172] (0x4000104dc0) Data frame received for 5\nI0817 23:04:51.686335    4824 log.go:172] (0x40008e00a0) (5) Data frame handling\nI0817 23:04:51.694141    4824 log.go:172] (0x4000104dc0) Data frame received for 1\nI0817 23:04:51.694252    4824 log.go:172] (0x40006e3c20) (1) Data frame handling\nI0817 23:04:51.694334    4824 log.go:172] (0x40006e3c20) (1) Data frame sent\nI0817 23:04:51.694840    4824 log.go:172] (0x4000104dc0) (0x40006e3c20) Stream removed, broadcasting: 1\nI0817 23:04:51.697317    4824 log.go:172] (0x4000104dc0) Go away received\nI0817 23:04:51.698726    4824 log.go:172] (0x4000104dc0) (0x40006e3c20) Stream removed, broadcasting: 1\nI0817 23:04:51.699145    4824 log.go:172] (0x4000104dc0) (0x40008e0000) Stream removed, broadcasting: 3\nI0817 23:04:51.699443    4824 log.go:172] (0x4000104dc0) (0x40008e00a0) Stream removed, broadcasting: 5\n"
Aug 17 23:04:51.710: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 17 23:04:51.710: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 17 23:04:51.710: INFO: Waiting for statefulset status.replicas updated to 0
Aug 17 23:04:51.753: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 17 23:05:01.777: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 17 23:05:01.777: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 17 23:05:01.777: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 17 23:05:01.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999994302s
Aug 17 23:05:02.821: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.977661094s
Aug 17 23:05:03.830: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968879328s
Aug 17 23:05:04.840: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.960292927s
Aug 17 23:05:05.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.950125946s
Aug 17 23:05:06.858: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.941992426s
Aug 17 23:05:07.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.932230639s
Aug 17 23:05:08.874: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.923290411s
Aug 17 23:05:09.880: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.916529345s
Aug 17 23:05:10.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 910.29998ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-7555
Aug 17 23:05:11.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:05:13.600: INFO: stderr: "I0817 23:05:13.475277    4848 log.go:172] (0x4000acc000) (0x4000bd8000) Create stream\nI0817 23:05:13.477863    4848 log.go:172] (0x4000acc000) (0x4000bd8000) Stream added, broadcasting: 1\nI0817 23:05:13.485992    4848 log.go:172] (0x4000acc000) Reply frame received for 1\nI0817 23:05:13.486541    4848 log.go:172] (0x4000acc000) (0x4000aa8000) Create stream\nI0817 23:05:13.486601    4848 log.go:172] (0x4000acc000) (0x4000aa8000) Stream added, broadcasting: 3\nI0817 23:05:13.488163    4848 log.go:172] (0x4000acc000) Reply frame received for 3\nI0817 23:05:13.488693    4848 log.go:172] (0x4000acc000) (0x4000bd80a0) Create stream\nI0817 23:05:13.488924    4848 log.go:172] (0x4000acc000) (0x4000bd80a0) Stream added, broadcasting: 5\nI0817 23:05:13.490946    4848 log.go:172] (0x4000acc000) Reply frame received for 5\nI0817 23:05:13.566153    4848 log.go:172] (0x4000acc000) Data frame received for 5\nI0817 23:05:13.566545    4848 log.go:172] (0x4000bd80a0) (5) Data frame handling\nI0817 23:05:13.567304    4848 log.go:172] (0x4000bd80a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 23:05:13.583027    4848 log.go:172] (0x4000acc000) Data frame received for 3\nI0817 23:05:13.583161    4848 log.go:172] (0x4000aa8000) (3) Data frame handling\nI0817 23:05:13.583256    4848 log.go:172] (0x4000acc000) Data frame received for 5\nI0817 23:05:13.583355    4848 log.go:172] (0x4000bd80a0) (5) Data frame handling\nI0817 23:05:13.583488    4848 log.go:172] (0x4000aa8000) (3) Data frame sent\nI0817 23:05:13.583556    4848 log.go:172] (0x4000acc000) Data frame received for 3\nI0817 23:05:13.583612    4848 log.go:172] (0x4000aa8000) (3) Data frame handling\nI0817 23:05:13.584520    4848 log.go:172] (0x4000acc000) Data frame received for 1\nI0817 23:05:13.584642    4848 log.go:172] (0x4000bd8000) (1) Data frame handling\nI0817 23:05:13.584890    4848 log.go:172] (0x4000bd8000) (1) Data frame sent\nI0817 23:05:13.585595    4848 log.go:172] (0x4000acc000) (0x4000bd8000) Stream removed, broadcasting: 1\nI0817 23:05:13.587018    4848 log.go:172] (0x4000acc000) Go away received\nI0817 23:05:13.589967    4848 log.go:172] (0x4000acc000) (0x4000bd8000) Stream removed, broadcasting: 1\nI0817 23:05:13.590432    4848 log.go:172] (0x4000acc000) (0x4000aa8000) Stream removed, broadcasting: 3\nI0817 23:05:13.590607    4848 log.go:172] (0x4000acc000) (0x4000bd80a0) Stream removed, broadcasting: 5\n"
Aug 17 23:05:13.601: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 17 23:05:13.602: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 17 23:05:13.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:05:15.042: INFO: stderr: "I0817 23:05:14.948221    4872 log.go:172] (0x400086c000) (0x40006806e0) Create stream\nI0817 23:05:14.950977    4872 log.go:172] (0x400086c000) (0x40006806e0) Stream added, broadcasting: 1\nI0817 23:05:14.961577    4872 log.go:172] (0x400086c000) Reply frame received for 1\nI0817 23:05:14.962106    4872 log.go:172] (0x400086c000) (0x40004f94a0) Create stream\nI0817 23:05:14.962162    4872 log.go:172] (0x400086c000) (0x40004f94a0) Stream added, broadcasting: 3\nI0817 23:05:14.963576    4872 log.go:172] (0x400086c000) Reply frame received for 3\nI0817 23:05:14.963893    4872 log.go:172] (0x400086c000) (0x4000930000) Create stream\nI0817 23:05:14.963968    4872 log.go:172] (0x400086c000) (0x4000930000) Stream added, broadcasting: 5\nI0817 23:05:14.965336    4872 log.go:172] (0x400086c000) Reply frame received for 5\nI0817 23:05:15.022405    4872 log.go:172] (0x400086c000) Data frame received for 3\nI0817 23:05:15.022564    4872 log.go:172] (0x400086c000) Data frame received for 5\nI0817 23:05:15.022713    4872 log.go:172] (0x400086c000) Data frame received for 1\nI0817 23:05:15.022894    4872 log.go:172] (0x40004f94a0) (3) Data frame handling\nI0817 23:05:15.023060    4872 log.go:172] (0x4000930000) (5) Data frame handling\nI0817 23:05:15.023307    4872 log.go:172] (0x40006806e0) (1) Data frame handling\nI0817 23:05:15.024654    4872 log.go:172] (0x40006806e0) (1) Data frame sent\nI0817 23:05:15.024934    4872 log.go:172] (0x4000930000) (5) Data frame sent\nI0817 23:05:15.025284    4872 log.go:172] (0x40004f94a0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0817 23:05:15.025450    4872 log.go:172] (0x400086c000) Data frame received for 3\nI0817 23:05:15.025735    4872 log.go:172] (0x40004f94a0) (3) Data frame handling\nI0817 23:05:15.025842    4872 log.go:172] (0x400086c000) Data frame received for 5\nI0817 23:05:15.025960    4872 log.go:172] (0x4000930000) (5) Data frame handling\nI0817 23:05:15.027737    4872 log.go:172] (0x400086c000) (0x40006806e0) Stream removed, broadcasting: 1\nI0817 23:05:15.030201    4872 log.go:172] (0x400086c000) Go away received\nI0817 23:05:15.033683    4872 log.go:172] (0x400086c000) (0x40006806e0) Stream removed, broadcasting: 1\nI0817 23:05:15.033970    4872 log.go:172] (0x400086c000) (0x40004f94a0) Stream removed, broadcasting: 3\nI0817 23:05:15.034167    4872 log.go:172] (0x400086c000) (0x4000930000) Stream removed, broadcasting: 5\n"
Aug 17 23:05:15.044: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 17 23:05:15.044: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 17 23:05:15.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:05:17.183: INFO: rc: 1
Aug 17 23:05:17.184: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Aug 17 23:05:27.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:05:28.418: INFO: rc: 1
Aug 17 23:05:28.419: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 17 23:05:38.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:05:39.679: INFO: rc: 1
Aug 17 23:05:39.679: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 17 23:05:49.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:05:50.962: INFO: rc: 1
Aug 17 23:05:50.962: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 17 23:06:00.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:06:02.208: INFO: rc: 1
Aug 17 23:06:02.208: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 17 23:06:12.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:06:13.434: INFO: rc: 1
Aug 17 23:06:13.434: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 17 23:06:23.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:06:24.682: INFO: rc: 1
Aug 17 23:06:24.682: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
[20 further identical RunHostCmd attempts, one every 10s from 23:06:34 to 23:10:12, collapsed here: each ran the same kubectl exec against pod ss-2 and failed with rc 1 and stderr 'Error from server (NotFound): pods "ss-2" not found']
Aug 17 23:10:24.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 17 23:10:25.369: INFO: rc: 1
Aug 17 23:10:25.370: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Aug 17 23:10:25.370: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 17 23:10:25.388: INFO: Deleting all statefulset in ns statefulset-7555
Aug 17 23:10:25.392: INFO: Scaling statefulset ss to 0
Aug 17 23:10:25.402: INFO: Waiting for statefulset status.replicas updated to 0
Aug 17 23:10:25.405: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:10:25.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7555" for this suite.

• [SLOW TEST:391.610 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":229,"skipped":3561,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:10:25.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 23:10:25.546: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6875cb0-5bf6-4a4d-b8cd-4a93712931c4" in namespace "projected-6021" to be "success or failure"
Aug 17 23:10:25.630: INFO: Pod "downwardapi-volume-f6875cb0-5bf6-4a4d-b8cd-4a93712931c4": Phase="Pending", Reason="", readiness=false. Elapsed: 83.791518ms
Aug 17 23:10:27.637: INFO: Pod "downwardapi-volume-f6875cb0-5bf6-4a4d-b8cd-4a93712931c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091129435s
Aug 17 23:10:29.642: INFO: Pod "downwardapi-volume-f6875cb0-5bf6-4a4d-b8cd-4a93712931c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096629093s
STEP: Saw pod success
Aug 17 23:10:29.643: INFO: Pod "downwardapi-volume-f6875cb0-5bf6-4a4d-b8cd-4a93712931c4" satisfied condition "success or failure"
Aug 17 23:10:29.646: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f6875cb0-5bf6-4a4d-b8cd-4a93712931c4 container client-container: 
STEP: delete the pod
Aug 17 23:10:29.704: INFO: Waiting for pod downwardapi-volume-f6875cb0-5bf6-4a4d-b8cd-4a93712931c4 to disappear
Aug 17 23:10:29.749: INFO: Pod downwardapi-volume-f6875cb0-5bf6-4a4d-b8cd-4a93712931c4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:10:29.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6021" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3568,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:10:29.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 17 23:10:30.030: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:10:37.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-532" for this suite.

• [SLOW TEST:7.856 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":231,"skipped":3602,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:10:37.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 23:10:37.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f2e7601-9e56-47aa-a811-6629426ca809" in namespace "downward-api-6976" to be "success or failure"
Aug 17 23:10:38.026: INFO: Pod "downwardapi-volume-3f2e7601-9e56-47aa-a811-6629426ca809": Phase="Pending", Reason="", readiness=false. Elapsed: 197.478788ms
Aug 17 23:10:40.032: INFO: Pod "downwardapi-volume-3f2e7601-9e56-47aa-a811-6629426ca809": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203464253s
Aug 17 23:10:42.040: INFO: Pod "downwardapi-volume-3f2e7601-9e56-47aa-a811-6629426ca809": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2110297s
STEP: Saw pod success
Aug 17 23:10:42.040: INFO: Pod "downwardapi-volume-3f2e7601-9e56-47aa-a811-6629426ca809" satisfied condition "success or failure"
Aug 17 23:10:42.045: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3f2e7601-9e56-47aa-a811-6629426ca809 container client-container: 
STEP: delete the pod
Aug 17 23:10:42.074: INFO: Waiting for pod downwardapi-volume-3f2e7601-9e56-47aa-a811-6629426ca809 to disappear
Aug 17 23:10:42.096: INFO: Pod downwardapi-volume-3f2e7601-9e56-47aa-a811-6629426ca809 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:10:42.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6976" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3671,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:10:42.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 23:10:44.329: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 23:10:46.849: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302644, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302644, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302644, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302644, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 23:10:49.894: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:10:50.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-49" for this suite.
STEP: Destroying namespace "webhook-49-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.015 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":233,"skipped":3725,"failed":0}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:10:50.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:11:27.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1476" for this suite.

• [SLOW TEST:37.722 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3728,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:11:27.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 23:11:28.540: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16f433e3-8fdd-4e06-8014-dcb53ef9e96e" in namespace "projected-8828" to be "success or failure"
Aug 17 23:11:28.596: INFO: Pod "downwardapi-volume-16f433e3-8fdd-4e06-8014-dcb53ef9e96e": Phase="Pending", Reason="", readiness=false. Elapsed: 55.570988ms
Aug 17 23:11:30.622: INFO: Pod "downwardapi-volume-16f433e3-8fdd-4e06-8014-dcb53ef9e96e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082458005s
Aug 17 23:11:32.913: INFO: Pod "downwardapi-volume-16f433e3-8fdd-4e06-8014-dcb53ef9e96e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372927539s
Aug 17 23:11:34.920: INFO: Pod "downwardapi-volume-16f433e3-8fdd-4e06-8014-dcb53ef9e96e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.38006235s
STEP: Saw pod success
Aug 17 23:11:34.921: INFO: Pod "downwardapi-volume-16f433e3-8fdd-4e06-8014-dcb53ef9e96e" satisfied condition "success or failure"
Aug 17 23:11:34.925: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-16f433e3-8fdd-4e06-8014-dcb53ef9e96e container client-container: 
STEP: delete the pod
Aug 17 23:11:34.957: INFO: Waiting for pod downwardapi-volume-16f433e3-8fdd-4e06-8014-dcb53ef9e96e to disappear
Aug 17 23:11:35.055: INFO: Pod downwardapi-volume-16f433e3-8fdd-4e06-8014-dcb53ef9e96e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:11:35.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8828" for this suite.

• [SLOW TEST:7.213 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3747,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:11:35.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:11:35.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9492" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":236,"skipped":3786,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:11:35.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8108
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 17 23:11:35.939: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 17 23:12:06.441: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.128:8080/dial?request=hostname&protocol=udp&host=10.244.2.127&port=8081&tries=1'] Namespace:pod-network-test-8108 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 23:12:06.441: INFO: >>> kubeConfig: /root/.kube/config
I0817 23:12:06.494733       7 log.go:172] (0x4001fb8420) (0x400102f540) Create stream
I0817 23:12:06.494901       7 log.go:172] (0x4001fb8420) (0x400102f540) Stream added, broadcasting: 1
I0817 23:12:06.497750       7 log.go:172] (0x4001fb8420) Reply frame received for 1
I0817 23:12:06.497884       7 log.go:172] (0x4001fb8420) (0x400102f680) Create stream
I0817 23:12:06.497946       7 log.go:172] (0x4001fb8420) (0x400102f680) Stream added, broadcasting: 3
I0817 23:12:06.499084       7 log.go:172] (0x4001fb8420) Reply frame received for 3
I0817 23:12:06.499206       7 log.go:172] (0x4001fb8420) (0x400102f900) Create stream
I0817 23:12:06.499274       7 log.go:172] (0x4001fb8420) (0x400102f900) Stream added, broadcasting: 5
I0817 23:12:06.500351       7 log.go:172] (0x4001fb8420) Reply frame received for 5
I0817 23:12:06.571340       7 log.go:172] (0x4001fb8420) Data frame received for 3
I0817 23:12:06.571492       7 log.go:172] (0x400102f680) (3) Data frame handling
I0817 23:12:06.571595       7 log.go:172] (0x400102f680) (3) Data frame sent
I0817 23:12:06.571683       7 log.go:172] (0x4001fb8420) Data frame received for 3
I0817 23:12:06.571777       7 log.go:172] (0x400102f680) (3) Data frame handling
I0817 23:12:06.572054       7 log.go:172] (0x4001fb8420) Data frame received for 5
I0817 23:12:06.572185       7 log.go:172] (0x400102f900) (5) Data frame handling
I0817 23:12:06.573546       7 log.go:172] (0x4001fb8420) Data frame received for 1
I0817 23:12:06.573746       7 log.go:172] (0x400102f540) (1) Data frame handling
I0817 23:12:06.573839       7 log.go:172] (0x400102f540) (1) Data frame sent
I0817 23:12:06.573910       7 log.go:172] (0x4001fb8420) (0x400102f540) Stream removed, broadcasting: 1
I0817 23:12:06.573994       7 log.go:172] (0x4001fb8420) Go away received
I0817 23:12:06.574199       7 log.go:172] (0x4001fb8420) (0x400102f540) Stream removed, broadcasting: 1
I0817 23:12:06.574291       7 log.go:172] (0x4001fb8420) (0x400102f680) Stream removed, broadcasting: 3
I0817 23:12:06.574372       7 log.go:172] (0x4001fb8420) (0x400102f900) Stream removed, broadcasting: 5
Aug 17 23:12:06.575: INFO: Waiting for responses: map[]
Aug 17 23:12:06.580: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.128:8080/dial?request=hostname&protocol=udp&host=10.244.1.180&port=8081&tries=1'] Namespace:pod-network-test-8108 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 23:12:06.580: INFO: >>> kubeConfig: /root/.kube/config
I0817 23:12:06.642417       7 log.go:172] (0x4002e582c0) (0x4000c24140) Create stream
I0817 23:12:06.642559       7 log.go:172] (0x4002e582c0) (0x4000c24140) Stream added, broadcasting: 1
I0817 23:12:06.653465       7 log.go:172] (0x4002e582c0) Reply frame received for 1
I0817 23:12:06.653634       7 log.go:172] (0x4002e582c0) (0x4001536000) Create stream
I0817 23:12:06.653694       7 log.go:172] (0x4002e582c0) (0x4001536000) Stream added, broadcasting: 3
I0817 23:12:06.656479       7 log.go:172] (0x4002e582c0) Reply frame received for 3
I0817 23:12:06.656651       7 log.go:172] (0x4002e582c0) (0x40015361e0) Create stream
I0817 23:12:06.656715       7 log.go:172] (0x4002e582c0) (0x40015361e0) Stream added, broadcasting: 5
I0817 23:12:06.658194       7 log.go:172] (0x4002e582c0) Reply frame received for 5
I0817 23:12:06.749648       7 log.go:172] (0x4002e582c0) Data frame received for 3
I0817 23:12:06.749868       7 log.go:172] (0x4001536000) (3) Data frame handling
I0817 23:12:06.750063       7 log.go:172] (0x4001536000) (3) Data frame sent
I0817 23:12:06.750462       7 log.go:172] (0x4002e582c0) Data frame received for 3
I0817 23:12:06.750592       7 log.go:172] (0x4001536000) (3) Data frame handling
I0817 23:12:06.750693       7 log.go:172] (0x4002e582c0) Data frame received for 5
I0817 23:12:06.750826       7 log.go:172] (0x40015361e0) (5) Data frame handling
I0817 23:12:06.752473       7 log.go:172] (0x4002e582c0) Data frame received for 1
I0817 23:12:06.752571       7 log.go:172] (0x4000c24140) (1) Data frame handling
I0817 23:12:06.752665       7 log.go:172] (0x4000c24140) (1) Data frame sent
I0817 23:12:06.752879       7 log.go:172] (0x4002e582c0) (0x4000c24140) Stream removed, broadcasting: 1
I0817 23:12:06.752984       7 log.go:172] (0x4002e582c0) Go away received
I0817 23:12:06.753538       7 log.go:172] (0x4002e582c0) (0x4000c24140) Stream removed, broadcasting: 1
I0817 23:12:06.753613       7 log.go:172] (0x4002e582c0) (0x4001536000) Stream removed, broadcasting: 3
I0817 23:12:06.753668       7 log.go:172] (0x4002e582c0) (0x40015361e0) Stream removed, broadcasting: 5
Aug 17 23:12:06.753: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:12:06.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8108" for this suite.

• [SLOW TEST:30.968 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3795,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:12:06.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 17 23:12:10.601: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 17 23:12:12.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:12:14.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:12:16.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733302730, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 23:12:19.694: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:12:19.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:12:20.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8691" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:14.213 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":238,"skipped":3804,"failed":0}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:12:20.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-1589
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 17 23:12:21.150: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 17 23:12:45.319: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.130:8080/dial?request=hostname&protocol=http&host=10.244.2.129&port=8080&tries=1'] Namespace:pod-network-test-1589 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 23:12:45.319: INFO: >>> kubeConfig: /root/.kube/config
I0817 23:12:45.378727       7 log.go:172] (0x4002e58a50) (0x40017acb40) Create stream
I0817 23:12:45.378879       7 log.go:172] (0x4002e58a50) (0x40017acb40) Stream added, broadcasting: 1
I0817 23:12:45.385083       7 log.go:172] (0x4002e58a50) Reply frame received for 1
I0817 23:12:45.385272       7 log.go:172] (0x4002e58a50) (0x40017acbe0) Create stream
I0817 23:12:45.385365       7 log.go:172] (0x4002e58a50) (0x40017acbe0) Stream added, broadcasting: 3
I0817 23:12:45.388383       7 log.go:172] (0x4002e58a50) Reply frame received for 3
I0817 23:12:45.388523       7 log.go:172] (0x4002e58a50) (0x4002f0d4a0) Create stream
I0817 23:12:45.388599       7 log.go:172] (0x4002e58a50) (0x4002f0d4a0) Stream added, broadcasting: 5
I0817 23:12:45.390518       7 log.go:172] (0x4002e58a50) Reply frame received for 5
I0817 23:12:45.452660       7 log.go:172] (0x4002e58a50) Data frame received for 3
I0817 23:12:45.452829       7 log.go:172] (0x40017acbe0) (3) Data frame handling
I0817 23:12:45.452907       7 log.go:172] (0x40017acbe0) (3) Data frame sent
I0817 23:12:45.453139       7 log.go:172] (0x4002e58a50) Data frame received for 3
I0817 23:12:45.453241       7 log.go:172] (0x40017acbe0) (3) Data frame handling
I0817 23:12:45.453397       7 log.go:172] (0x4002e58a50) Data frame received for 5
I0817 23:12:45.453504       7 log.go:172] (0x4002f0d4a0) (5) Data frame handling
I0817 23:12:45.454707       7 log.go:172] (0x4002e58a50) Data frame received for 1
I0817 23:12:45.454757       7 log.go:172] (0x40017acb40) (1) Data frame handling
I0817 23:12:45.454809       7 log.go:172] (0x40017acb40) (1) Data frame sent
I0817 23:12:45.454869       7 log.go:172] (0x4002e58a50) (0x40017acb40) Stream removed, broadcasting: 1
I0817 23:12:45.454933       7 log.go:172] (0x4002e58a50) Go away received
I0817 23:12:45.455267       7 log.go:172] (0x4002e58a50) (0x40017acb40) Stream removed, broadcasting: 1
I0817 23:12:45.455369       7 log.go:172] (0x4002e58a50) (0x40017acbe0) Stream removed, broadcasting: 3
I0817 23:12:45.455466       7 log.go:172] (0x4002e58a50) (0x4002f0d4a0) Stream removed, broadcasting: 5
Aug 17 23:12:45.455: INFO: Waiting for responses: map[]
Aug 17 23:12:45.459: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.130:8080/dial?request=hostname&protocol=http&host=10.244.1.182&port=8080&tries=1'] Namespace:pod-network-test-1589 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 17 23:12:45.459: INFO: >>> kubeConfig: /root/.kube/config
I0817 23:12:45.507008       7 log.go:172] (0x4002e591e0) (0x40017ad2c0) Create stream
I0817 23:12:45.507142       7 log.go:172] (0x4002e591e0) (0x40017ad2c0) Stream added, broadcasting: 1
I0817 23:12:45.510275       7 log.go:172] (0x4002e591e0) Reply frame received for 1
I0817 23:12:45.510473       7 log.go:172] (0x4002e591e0) (0x400102fb80) Create stream
I0817 23:12:45.510555       7 log.go:172] (0x4002e591e0) (0x400102fb80) Stream added, broadcasting: 3
I0817 23:12:45.512166       7 log.go:172] (0x4002e591e0) Reply frame received for 3
I0817 23:12:45.512303       7 log.go:172] (0x4002e591e0) (0x4001b5e000) Create stream
I0817 23:12:45.512375       7 log.go:172] (0x4002e591e0) (0x4001b5e000) Stream added, broadcasting: 5
I0817 23:12:45.513734       7 log.go:172] (0x4002e591e0) Reply frame received for 5
I0817 23:12:45.574960       7 log.go:172] (0x4002e591e0) Data frame received for 3
I0817 23:12:45.575209       7 log.go:172] (0x400102fb80) (3) Data frame handling
I0817 23:12:45.575369       7 log.go:172] (0x400102fb80) (3) Data frame sent
I0817 23:12:45.575526       7 log.go:172] (0x4002e591e0) Data frame received for 5
I0817 23:12:45.575642       7 log.go:172] (0x4002e591e0) Data frame received for 3
I0817 23:12:45.575782       7 log.go:172] (0x400102fb80) (3) Data frame handling
I0817 23:12:45.575843       7 log.go:172] (0x4001b5e000) (5) Data frame handling
I0817 23:12:45.577035       7 log.go:172] (0x4002e591e0) Data frame received for 1
I0817 23:12:45.577123       7 log.go:172] (0x40017ad2c0) (1) Data frame handling
I0817 23:12:45.577234       7 log.go:172] (0x40017ad2c0) (1) Data frame sent
I0817 23:12:45.577349       7 log.go:172] (0x4002e591e0) (0x40017ad2c0) Stream removed, broadcasting: 1
I0817 23:12:45.577464       7 log.go:172] (0x4002e591e0) Go away received
I0817 23:12:45.577618       7 log.go:172] (0x4002e591e0) (0x40017ad2c0) Stream removed, broadcasting: 1
I0817 23:12:45.577705       7 log.go:172] (0x4002e591e0) (0x400102fb80) Stream removed, broadcasting: 3
I0817 23:12:45.577787       7 log.go:172] (0x4002e591e0) (0x4001b5e000) Stream removed, broadcasting: 5
Aug 17 23:12:45.577: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:12:45.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1589" for this suite.

• [SLOW TEST:24.608 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3809,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:12:45.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 23:12:45.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1174e397-f774-4dbb-9c69-c6e7a5b86e99" in namespace "projected-1990" to be "success or failure"
Aug 17 23:12:45.824: INFO: Pod "downwardapi-volume-1174e397-f774-4dbb-9c69-c6e7a5b86e99": Phase="Pending", Reason="", readiness=false. Elapsed: 34.203968ms
Aug 17 23:12:47.985: INFO: Pod "downwardapi-volume-1174e397-f774-4dbb-9c69-c6e7a5b86e99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194773942s
Aug 17 23:12:49.991: INFO: Pod "downwardapi-volume-1174e397-f774-4dbb-9c69-c6e7a5b86e99": Phase="Running", Reason="", readiness=true. Elapsed: 4.201015351s
Aug 17 23:12:52.017: INFO: Pod "downwardapi-volume-1174e397-f774-4dbb-9c69-c6e7a5b86e99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.226670194s
STEP: Saw pod success
Aug 17 23:12:52.017: INFO: Pod "downwardapi-volume-1174e397-f774-4dbb-9c69-c6e7a5b86e99" satisfied condition "success or failure"
Aug 17 23:12:52.326: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1174e397-f774-4dbb-9c69-c6e7a5b86e99 container client-container: 
STEP: delete the pod
Aug 17 23:12:52.579: INFO: Waiting for pod downwardapi-volume-1174e397-f774-4dbb-9c69-c6e7a5b86e99 to disappear
Aug 17 23:12:52.697: INFO: Pod downwardapi-volume-1174e397-f774-4dbb-9c69-c6e7a5b86e99 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:12:52.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1990" for this suite.

• [SLOW TEST:7.205 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3833,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:12:52.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:12:53.334: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 17 23:12:53.365: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:12:53.410: INFO: Number of nodes with available pods: 0
Aug 17 23:12:53.410: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 23:12:54.446: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:12:54.451: INFO: Number of nodes with available pods: 0
Aug 17 23:12:54.452: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 23:12:55.605: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:12:55.611: INFO: Number of nodes with available pods: 0
Aug 17 23:12:55.611: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 23:12:56.582: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:12:56.621: INFO: Number of nodes with available pods: 0
Aug 17 23:12:56.621: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 23:12:57.420: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:12:57.425: INFO: Number of nodes with available pods: 0
Aug 17 23:12:57.425: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 23:12:58.416: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:12:58.430: INFO: Number of nodes with available pods: 2
Aug 17 23:12:58.430: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 17 23:12:58.532: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:12:58.532: INFO: Wrong image for pod: daemon-set-v59kp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:12:58.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:12:59.545: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:12:59.545: INFO: Wrong image for pod: daemon-set-v59kp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:12:59.553: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:00.566: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:00.566: INFO: Wrong image for pod: daemon-set-v59kp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:00.918: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:01.584: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:01.584: INFO: Wrong image for pod: daemon-set-v59kp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:01.791: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:02.741: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:02.741: INFO: Wrong image for pod: daemon-set-v59kp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:02.741: INFO: Pod daemon-set-v59kp is not available
Aug 17 23:13:02.814: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:03.866: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:03.866: INFO: Wrong image for pod: daemon-set-v59kp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:03.866: INFO: Pod daemon-set-v59kp is not available
Aug 17 23:13:03.875: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:04.596: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:04.858: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:05.547: INFO: Pod daemon-set-4dh4j is not available
Aug 17 23:13:05.547: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:05.556: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:06.621: INFO: Pod daemon-set-4dh4j is not available
Aug 17 23:13:06.621: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:06.774: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:07.693: INFO: Pod daemon-set-4dh4j is not available
Aug 17 23:13:07.693: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:07.701: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:08.554: INFO: Pod daemon-set-4dh4j is not available
Aug 17 23:13:08.554: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:08.562: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:09.579: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:09.589: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:10.547: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:10.554: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:11.554: INFO: Wrong image for pod: daemon-set-5k7c8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 17 23:13:11.554: INFO: Pod daemon-set-5k7c8 is not available
Aug 17 23:13:11.563: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:12.544: INFO: Pod daemon-set-n4b2h is not available
Aug 17 23:13:12.567: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 17 23:13:12.572: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:12.575: INFO: Number of nodes with available pods: 1
Aug 17 23:13:12.575: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 23:13:13.628: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:13.632: INFO: Number of nodes with available pods: 1
Aug 17 23:13:13.632: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 23:13:14.605: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:14.612: INFO: Number of nodes with available pods: 1
Aug 17 23:13:14.612: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 23:13:15.697: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:15.895: INFO: Number of nodes with available pods: 1
Aug 17 23:13:15.896: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 23:13:16.583: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:16.588: INFO: Number of nodes with available pods: 1
Aug 17 23:13:16.588: INFO: Node jerma-worker is running more than one daemon pod
Aug 17 23:13:17.583: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 17 23:13:17.588: INFO: Number of nodes with available pods: 2
Aug 17 23:13:17.588: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5432, will wait for the garbage collector to delete the pods
Aug 17 23:13:17.674: INFO: Deleting DaemonSet.extensions daemon-set took: 5.282323ms
Aug 17 23:13:18.075: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.722015ms
Aug 17 23:13:32.367: INFO: Number of nodes with available pods: 0
Aug 17 23:13:32.367: INFO: Number of running nodes: 0, number of available pods: 0
Aug 17 23:13:32.422: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5432/daemonsets","resourceVersion":"899553"},"items":null}

Aug 17 23:13:32.541: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5432/pods","resourceVersion":"899555"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:13:32.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5432" for this suite.

• [SLOW TEST:39.768 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":241,"skipped":3835,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:13:32.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-40c2d61f-df33-429f-8935-54ccb618e8a3 in namespace container-probe-1678
Aug 17 23:13:36.875: INFO: Started pod test-webserver-40c2d61f-df33-429f-8935-54ccb618e8a3 in namespace container-probe-1678
STEP: checking the pod's current state and verifying that restartCount is present
Aug 17 23:13:36.881: INFO: Initial restart count of pod test-webserver-40c2d61f-df33-429f-8935-54ccb618e8a3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:17:38.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1678" for this suite.

• [SLOW TEST:246.194 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3839,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:17:38.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-nml2n in namespace proxy-8395
I0817 23:17:39.102367       7 runners.go:189] Created replication controller with name: proxy-service-nml2n, namespace: proxy-8395, replica count: 1
I0817 23:17:40.153896       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 23:17:41.154620       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 23:17:42.155396       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0817 23:17:43.156171       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0817 23:17:44.157096       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0817 23:17:45.157861       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0817 23:17:46.158434       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0817 23:17:47.159028       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0817 23:17:48.159804       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0817 23:17:49.160511       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0817 23:17:50.161233       7 runners.go:189] proxy-service-nml2n Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 17 23:17:50.174: INFO: setup took 11.326323358s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 17 23:17:50.186: INFO: (0) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 10.942017ms)
Aug 17 23:17:50.186: INFO: (0) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 10.756203ms)
Aug 17 23:17:50.187: INFO: (0) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 12.007936ms)
Aug 17 23:17:50.187: INFO: (0) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 12.028615ms)
Aug 17 23:17:50.187: INFO: (0) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 12.586601ms)
Aug 17 23:17:50.187: INFO: (0) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 12.343441ms)
Aug 17 23:17:50.188: INFO: (0) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 13.041676ms)
Aug 17 23:17:50.192: INFO: (0) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 17.166367ms)
Aug 17 23:17:50.192: INFO: (0) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 17.259206ms)
Aug 17 23:17:50.193: INFO: (0) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 17.622189ms)
Aug 17 23:17:50.193: INFO: (0) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 18.229688ms)
Aug 17 23:17:50.193: INFO: (0) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 18.366373ms)
Aug 17 23:17:50.193: INFO: (0) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 18.317903ms)
Aug 17 23:17:50.195: INFO: (0) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test<... (200; 5.446669ms)
Aug 17 23:17:50.203: INFO: (1) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 6.17803ms)
Aug 17 23:17:50.203: INFO: (1) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 6.921068ms)
Aug 17 23:17:50.203: INFO: (1) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 6.666665ms)
Aug 17 23:17:50.204: INFO: (1) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 7.326609ms)
Aug 17 23:17:50.204: INFO: (1) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 7.693184ms)
Aug 17 23:17:50.204: INFO: (1) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 7.221758ms)
Aug 17 23:17:50.204: INFO: (1) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 7.527638ms)
Aug 17 23:17:50.204: INFO: (1) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 7.515174ms)
Aug 17 23:17:50.205: INFO: (1) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 7.901431ms)
Aug 17 23:17:50.205: INFO: (1) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 8.161257ms)
Aug 17 23:17:50.205: INFO: (1) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test<... (200; 4.641798ms)
Aug 17 23:17:50.210: INFO: (2) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 4.146241ms)
Aug 17 23:17:50.210: INFO: (2) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 4.822195ms)
Aug 17 23:17:50.210: INFO: (2) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 4.987144ms)
Aug 17 23:17:50.210: INFO: (2) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 4.943759ms)
Aug 17 23:17:50.210: INFO: (2) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 5.141393ms)
Aug 17 23:17:50.213: INFO: (2) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 8.019952ms)
Aug 17 23:17:50.214: INFO: (2) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 8.377754ms)
Aug 17 23:17:50.214: INFO: (2) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 8.324557ms)
Aug 17 23:17:50.214: INFO: (2) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: ... (200; 8.242623ms)
Aug 17 23:17:50.214: INFO: (2) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 8.367345ms)
Aug 17 23:17:50.214: INFO: (2) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 8.604386ms)
Aug 17 23:17:50.214: INFO: (2) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 8.725609ms)
Aug 17 23:17:50.214: INFO: (2) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 8.800373ms)
Aug 17 23:17:50.219: INFO: (3) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 4.847521ms)
Aug 17 23:17:50.219: INFO: (3) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 5.121455ms)
Aug 17 23:17:50.219: INFO: (3) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 5.083559ms)
Aug 17 23:17:50.219: INFO: (3) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 5.331458ms)
Aug 17 23:17:50.220: INFO: (3) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test<... (200; 8.701495ms)
Aug 17 23:17:50.223: INFO: (3) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 8.627695ms)
Aug 17 23:17:50.223: INFO: (3) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 8.914714ms)
Aug 17 23:17:50.223: INFO: (3) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 9.31177ms)
Aug 17 23:17:50.223: INFO: (3) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 9.080167ms)
Aug 17 23:17:50.224: INFO: (3) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 9.116493ms)
Aug 17 23:17:50.224: INFO: (3) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 10.320039ms)
Aug 17 23:17:50.228: INFO: (4) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 3.27522ms)
Aug 17 23:17:50.230: INFO: (4) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 4.823768ms)
Aug 17 23:17:50.230: INFO: (4) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 4.841735ms)
Aug 17 23:17:50.230: INFO: (4) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 4.926573ms)
Aug 17 23:17:50.230: INFO: (4) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 4.968408ms)
Aug 17 23:17:50.230: INFO: (4) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 5.016754ms)
Aug 17 23:17:50.230: INFO: (4) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test<... (200; 7.608593ms)
Aug 17 23:17:50.233: INFO: (4) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 7.846355ms)
Aug 17 23:17:50.236: INFO: (5) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 3.302616ms)
Aug 17 23:17:50.237: INFO: (5) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 4.119176ms)
Aug 17 23:17:50.237: INFO: (5) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 4.49494ms)
Aug 17 23:17:50.238: INFO: (5) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 5.063713ms)
Aug 17 23:17:50.239: INFO: (5) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 6.439667ms)
Aug 17 23:17:50.240: INFO: (5) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 6.576452ms)
Aug 17 23:17:50.240: INFO: (5) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 6.524757ms)
Aug 17 23:17:50.240: INFO: (5) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 6.792231ms)
Aug 17 23:17:50.240: INFO: (5) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 6.960619ms)
Aug 17 23:17:50.241: INFO: (5) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 7.809387ms)
Aug 17 23:17:50.241: INFO: (5) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 8.35128ms)
Aug 17 23:17:50.241: INFO: (5) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 8.230652ms)
Aug 17 23:17:50.241: INFO: (5) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 8.403832ms)
Aug 17 23:17:50.241: INFO: (5) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 8.112897ms)
Aug 17 23:17:50.241: INFO: (5) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test (200; 8.382474ms)
Aug 17 23:17:50.246: INFO: (6) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 4.150676ms)
Aug 17 23:17:50.246: INFO: (6) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 4.373466ms)
Aug 17 23:17:50.246: INFO: (6) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 4.079675ms)
Aug 17 23:17:50.247: INFO: (6) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 4.641792ms)
Aug 17 23:17:50.247: INFO: (6) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 5.22559ms)
Aug 17 23:17:50.247: INFO: (6) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 5.49679ms)
Aug 17 23:17:50.247: INFO: (6) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: ... (200; 6.738811ms)
Aug 17 23:17:50.249: INFO: (6) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 6.9045ms)
Aug 17 23:17:50.253: INFO: (7) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 3.656624ms)
Aug 17 23:17:50.253: INFO: (7) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 3.682408ms)
Aug 17 23:17:50.256: INFO: (7) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 5.715746ms)
Aug 17 23:17:50.256: INFO: (7) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 5.392227ms)
Aug 17 23:17:50.256: INFO: (7) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 6.309559ms)
Aug 17 23:17:50.256: INFO: (7) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 5.39929ms)
Aug 17 23:17:50.256: INFO: (7) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 5.094145ms)
Aug 17 23:17:50.257: INFO: (7) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 5.354972ms)
Aug 17 23:17:50.257: INFO: (7) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 6.239731ms)
Aug 17 23:17:50.257: INFO: (7) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 6.29559ms)
Aug 17 23:17:50.256: INFO: (7) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 6.711802ms)
Aug 17 23:17:50.257: INFO: (7) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 6.647719ms)
Aug 17 23:17:50.257: INFO: (7) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test<... (200; 27.057277ms)
Aug 17 23:17:50.286: INFO: (8) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 27.527051ms)
Aug 17 23:17:50.286: INFO: (8) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 27.591069ms)
Aug 17 23:17:50.286: INFO: (8) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 27.789593ms)
Aug 17 23:17:50.287: INFO: (8) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: ... (200; 28.802333ms)
Aug 17 23:17:50.287: INFO: (8) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 29.000171ms)
Aug 17 23:17:50.288: INFO: (8) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 29.226009ms)
Aug 17 23:17:50.288: INFO: (8) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 29.441529ms)
Aug 17 23:17:50.288: INFO: (8) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 29.615523ms)
Aug 17 23:17:50.288: INFO: (8) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 29.8305ms)
Aug 17 23:17:50.288: INFO: (8) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 29.58731ms)
Aug 17 23:17:50.288: INFO: (8) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 29.881984ms)
Aug 17 23:17:50.289: INFO: (8) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 30.412155ms)
Aug 17 23:17:50.289: INFO: (8) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 30.888312ms)
Aug 17 23:17:50.289: INFO: (8) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 30.759038ms)
Aug 17 23:17:50.295: INFO: (9) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 4.863067ms)
Aug 17 23:17:50.296: INFO: (9) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 5.797954ms)
Aug 17 23:17:50.296: INFO: (9) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 5.523051ms)
Aug 17 23:17:50.296: INFO: (9) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 5.694328ms)
Aug 17 23:17:50.296: INFO: (9) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 6.774524ms)
Aug 17 23:17:50.296: INFO: (9) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 6.79777ms)
Aug 17 23:17:50.297: INFO: (9) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 7.544307ms)
Aug 17 23:17:50.298: INFO: (9) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 7.824435ms)
Aug 17 23:17:50.298: INFO: (9) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 8.504023ms)
Aug 17 23:17:50.298: INFO: (9) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 8.214427ms)
Aug 17 23:17:50.298: INFO: (9) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 8.171618ms)
Aug 17 23:17:50.298: INFO: (9) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 8.377872ms)
Aug 17 23:17:50.298: INFO: (9) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 8.789735ms)
Aug 17 23:17:50.299: INFO: (9) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: ... (200; 9.190299ms)
Aug 17 23:17:50.308: INFO: (10) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 9.413209ms)
Aug 17 23:17:50.309: INFO: (10) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 9.644646ms)
Aug 17 23:17:50.309: INFO: (10) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 9.818593ms)
Aug 17 23:17:50.309: INFO: (10) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 9.966464ms)
Aug 17 23:17:50.309: INFO: (10) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 10.268734ms)
Aug 17 23:17:50.309: INFO: (10) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 10.024986ms)
Aug 17 23:17:50.309: INFO: (10) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 10.116365ms)
Aug 17 23:17:50.309: INFO: (10) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 10.574414ms)
Aug 17 23:17:50.313: INFO: (11) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 3.416108ms)
Aug 17 23:17:50.315: INFO: (11) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 4.917915ms)
Aug 17 23:17:50.315: INFO: (11) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 5.301762ms)
Aug 17 23:17:50.315: INFO: (11) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 5.179596ms)
Aug 17 23:17:50.315: INFO: (11) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 5.470605ms)
Aug 17 23:17:50.315: INFO: (11) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 5.697906ms)
Aug 17 23:17:50.315: INFO: (11) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 5.754918ms)
Aug 17 23:17:50.315: INFO: (11) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test<... (200; 4.549862ms)
Aug 17 23:17:50.323: INFO: (12) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 4.958132ms)
Aug 17 23:17:50.323: INFO: (12) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 4.329841ms)
Aug 17 23:17:50.323: INFO: (12) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 4.41547ms)
Aug 17 23:17:50.323: INFO: (12) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 4.066143ms)
Aug 17 23:17:50.324: INFO: (12) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 4.154798ms)
Aug 17 23:17:50.324: INFO: (12) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 3.582491ms)
Aug 17 23:17:50.324: INFO: (12) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 4.19916ms)
Aug 17 23:17:50.325: INFO: (12) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 3.781094ms)
Aug 17 23:17:50.325: INFO: (12) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 4.263053ms)
Aug 17 23:17:50.325: INFO: (12) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 4.471267ms)
Aug 17 23:17:50.325: INFO: (12) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 4.580771ms)
Aug 17 23:17:50.325: INFO: (12) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 4.942379ms)
Aug 17 23:17:50.326: INFO: (12) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 4.974355ms)
Aug 17 23:17:50.326: INFO: (12) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: ... (200; 2.534249ms)
Aug 17 23:17:50.330: INFO: (13) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 3.545461ms)
Aug 17 23:17:50.330: INFO: (13) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 3.554563ms)
Aug 17 23:17:50.332: INFO: (13) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 5.805002ms)
Aug 17 23:17:50.332: INFO: (13) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 5.953685ms)
Aug 17 23:17:50.332: INFO: (13) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: ... (200; 4.051863ms)
Aug 17 23:17:50.337: INFO: (14) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 4.176549ms)
Aug 17 23:17:50.338: INFO: (14) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 4.581833ms)
Aug 17 23:17:50.338: INFO: (14) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 4.482341ms)
Aug 17 23:17:50.338: INFO: (14) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 4.7686ms)
Aug 17 23:17:50.338: INFO: (14) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 4.90392ms)
Aug 17 23:17:50.338: INFO: (14) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 5.19331ms)
Aug 17 23:17:50.338: INFO: (14) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 5.149602ms)
Aug 17 23:17:50.339: INFO: (14) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 6.076853ms)
Aug 17 23:17:50.339: INFO: (14) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 6.166243ms)
Aug 17 23:17:50.340: INFO: (14) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 6.044746ms)
Aug 17 23:17:50.340: INFO: (14) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 6.216292ms)
Aug 17 23:17:50.340: INFO: (14) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 6.26174ms)
Aug 17 23:17:50.340: INFO: (14) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: ... (200; 5.829734ms)
Aug 17 23:17:50.346: INFO: (15) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 6.400088ms)
Aug 17 23:17:50.346: INFO: (15) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test<... (200; 6.74258ms)
Aug 17 23:17:50.347: INFO: (15) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 6.524026ms)
Aug 17 23:17:50.347: INFO: (15) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 7.178319ms)
Aug 17 23:17:50.347: INFO: (15) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 7.156479ms)
Aug 17 23:17:50.348: INFO: (15) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 7.33216ms)
Aug 17 23:17:50.348: INFO: (15) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 7.742721ms)
Aug 17 23:17:50.348: INFO: (15) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 7.460431ms)
Aug 17 23:17:50.348: INFO: (15) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 7.640046ms)
Aug 17 23:17:50.348: INFO: (15) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 8.020814ms)
Aug 17 23:17:50.348: INFO: (15) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 8.146287ms)
Aug 17 23:17:50.348: INFO: (15) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 7.861934ms)
Aug 17 23:17:50.349: INFO: (15) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 8.156707ms)
Aug 17 23:17:50.352: INFO: (16) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 3.355764ms)
Aug 17 23:17:50.353: INFO: (16) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 4.021441ms)
Aug 17 23:17:50.354: INFO: (16) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 4.869551ms)
Aug 17 23:17:50.354: INFO: (16) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 5.235839ms)
Aug 17 23:17:50.354: INFO: (16) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 5.285605ms)
Aug 17 23:17:50.354: INFO: (16) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 5.549019ms)
Aug 17 23:17:50.354: INFO: (16) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 5.628922ms)
Aug 17 23:17:50.355: INFO: (16) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 5.496101ms)
Aug 17 23:17:50.355: INFO: (16) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 5.856598ms)
Aug 17 23:17:50.355: INFO: (16) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 5.828706ms)
Aug 17 23:17:50.355: INFO: (16) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 6.081386ms)
Aug 17 23:17:50.355: INFO: (16) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 6.032239ms)
Aug 17 23:17:50.355: INFO: (16) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 6.213205ms)
Aug 17 23:17:50.356: INFO: (16) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 7.009262ms)
Aug 17 23:17:50.356: INFO: (16) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: ... (200; 6.910775ms)
Aug 17 23:17:50.364: INFO: (17) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9/proxy/: test (200; 7.374473ms)
Aug 17 23:17:50.364: INFO: (17) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test<... (200; 8.18179ms)
Aug 17 23:17:50.366: INFO: (17) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname1/proxy/: foo (200; 9.284564ms)
Aug 17 23:17:50.366: INFO: (17) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 9.300425ms)
Aug 17 23:17:50.366: INFO: (17) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 9.027271ms)
Aug 17 23:17:50.366: INFO: (17) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 9.519583ms)
Aug 17 23:17:50.366: INFO: (17) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 9.345982ms)
Aug 17 23:17:50.366: INFO: (17) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 9.611591ms)
Aug 17 23:17:50.367: INFO: (17) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 9.818504ms)
Aug 17 23:17:50.371: INFO: (18) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test (200; 5.44662ms)
Aug 17 23:17:50.372: INFO: (18) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 5.530482ms)
Aug 17 23:17:50.373: INFO: (18) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 5.411952ms)
Aug 17 23:17:50.373: INFO: (18) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 5.639291ms)
Aug 17 23:17:50.373: INFO: (18) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 5.465282ms)
Aug 17 23:17:50.373: INFO: (18) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 5.60862ms)
Aug 17 23:17:50.373: INFO: (18) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 5.970572ms)
Aug 17 23:17:50.373: INFO: (18) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 5.881269ms)
Aug 17 23:17:50.373: INFO: (18) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 5.82115ms)
Aug 17 23:17:50.376: INFO: (19) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 2.724677ms)
Aug 17 23:17:50.377: INFO: (19) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:443/proxy/: test (200; 4.262499ms)
Aug 17 23:17:50.379: INFO: (19) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname2/proxy/: tls qux (200; 5.588187ms)
Aug 17 23:17:50.379: INFO: (19) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname2/proxy/: bar (200; 5.691417ms)
Aug 17 23:17:50.379: INFO: (19) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:460/proxy/: tls baz (200; 5.760613ms)
Aug 17 23:17:50.380: INFO: (19) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 6.490069ms)
Aug 17 23:17:50.380: INFO: (19) /api/v1/namespaces/proxy-8395/services/https:proxy-service-nml2n:tlsportname1/proxy/: tls baz (200; 6.292457ms)
Aug 17 23:17:50.380: INFO: (19) /api/v1/namespaces/proxy-8395/pods/https:proxy-service-nml2n-nzvd9:462/proxy/: tls qux (200; 6.646021ms)
Aug 17 23:17:50.381: INFO: (19) /api/v1/namespaces/proxy-8395/pods/proxy-service-nml2n-nzvd9:1080/proxy/: test<... (200; 7.96829ms)
Aug 17 23:17:50.381: INFO: (19) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:162/proxy/: bar (200; 7.780792ms)
Aug 17 23:17:50.381: INFO: (19) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:160/proxy/: foo (200; 7.807489ms)
Aug 17 23:17:50.381: INFO: (19) /api/v1/namespaces/proxy-8395/pods/http:proxy-service-nml2n-nzvd9:1080/proxy/: ... (200; 7.956655ms)
Aug 17 23:17:50.382: INFO: (19) /api/v1/namespaces/proxy-8395/services/proxy-service-nml2n:portname1/proxy/: foo (200; 8.376653ms)
Aug 17 23:17:50.382: INFO: (19) /api/v1/namespaces/proxy-8395/services/http:proxy-service-nml2n:portname2/proxy/: bar (200; 8.135319ms)
STEP: deleting ReplicationController proxy-service-nml2n in namespace proxy-8395, will wait for the garbage collector to delete the pods
Aug 17 23:17:50.443: INFO: Deleting ReplicationController proxy-service-nml2n took: 8.241853ms
Aug 17 23:17:50.744: INFO: Terminating ReplicationController proxy-service-nml2n pods took: 300.77632ms
[AfterEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:17:53.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8395" for this suite.

• [SLOW TEST:14.997 seconds]
[sig-network] Proxy
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":243,"skipped":3848,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:17:53.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 17 23:17:53.848: INFO: Waiting up to 5m0s for pod "pod-356a35bc-4f27-433d-a645-e96b1ee94af3" in namespace "emptydir-1080" to be "success or failure"
Aug 17 23:17:53.859: INFO: Pod "pod-356a35bc-4f27-433d-a645-e96b1ee94af3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.616287ms
Aug 17 23:17:55.891: INFO: Pod "pod-356a35bc-4f27-433d-a645-e96b1ee94af3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042204539s
Aug 17 23:17:57.897: INFO: Pod "pod-356a35bc-4f27-433d-a645-e96b1ee94af3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047877687s
STEP: Saw pod success
Aug 17 23:17:57.897: INFO: Pod "pod-356a35bc-4f27-433d-a645-e96b1ee94af3" satisfied condition "success or failure"
Aug 17 23:17:57.901: INFO: Trying to get logs from node jerma-worker2 pod pod-356a35bc-4f27-433d-a645-e96b1ee94af3 container test-container: 
STEP: delete the pod
Aug 17 23:17:57.950: INFO: Waiting for pod pod-356a35bc-4f27-433d-a645-e96b1ee94af3 to disappear
Aug 17 23:17:57.960: INFO: Pod pod-356a35bc-4f27-433d-a645-e96b1ee94af3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:17:57.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1080" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3868,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:17:58.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-p4ml
STEP: Creating a pod to test atomic-volume-subpath
Aug 17 23:17:58.149: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p4ml" in namespace "subpath-4578" to be "success or failure"
Aug 17 23:17:58.168: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Pending", Reason="", readiness=false. Elapsed: 19.297852ms
Aug 17 23:18:00.375: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226408631s
Aug 17 23:18:02.383: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 4.234074738s
Aug 17 23:18:04.390: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 6.240799846s
Aug 17 23:18:06.397: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 8.247654401s
Aug 17 23:18:08.404: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 10.254572936s
Aug 17 23:18:10.410: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 12.261405915s
Aug 17 23:18:12.420: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 14.271276828s
Aug 17 23:18:14.427: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 16.278176727s
Aug 17 23:18:16.433: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 18.283472459s
Aug 17 23:18:18.439: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 20.290027791s
Aug 17 23:18:20.446: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 22.296908856s
Aug 17 23:18:22.641: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Running", Reason="", readiness=true. Elapsed: 24.491978014s
Aug 17 23:18:24.648: INFO: Pod "pod-subpath-test-configmap-p4ml": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.499018929s
STEP: Saw pod success
Aug 17 23:18:24.649: INFO: Pod "pod-subpath-test-configmap-p4ml" satisfied condition "success or failure"
Aug 17 23:18:24.661: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-p4ml container test-container-subpath-configmap-p4ml: 
STEP: delete the pod
Aug 17 23:18:24.711: INFO: Waiting for pod pod-subpath-test-configmap-p4ml to disappear
Aug 17 23:18:24.715: INFO: Pod pod-subpath-test-configmap-p4ml no longer exists
STEP: Deleting pod pod-subpath-test-configmap-p4ml
Aug 17 23:18:24.715: INFO: Deleting pod "pod-subpath-test-configmap-p4ml" in namespace "subpath-4578"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:18:24.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4578" for this suite.

• [SLOW TEST:26.721 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":245,"skipped":3886,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:18:24.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 23:18:28.921: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 23:18:30.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303108, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303108, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303108, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303108, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 23:18:33.976: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:18:34.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7246" for this suite.
STEP: Destroying namespace "webhook-7246-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.892 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":246,"skipped":3910,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:18:34.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 23:18:34.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db218da2-8dde-457a-842b-16545ba67a87" in namespace "downward-api-3446" to be "success or failure"
Aug 17 23:18:34.788: INFO: Pod "downwardapi-volume-db218da2-8dde-457a-842b-16545ba67a87": Phase="Pending", Reason="", readiness=false. Elapsed: 32.685241ms
Aug 17 23:18:36.837: INFO: Pod "downwardapi-volume-db218da2-8dde-457a-842b-16545ba67a87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08206398s
Aug 17 23:18:38.862: INFO: Pod "downwardapi-volume-db218da2-8dde-457a-842b-16545ba67a87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107008982s
STEP: Saw pod success
Aug 17 23:18:38.862: INFO: Pod "downwardapi-volume-db218da2-8dde-457a-842b-16545ba67a87" satisfied condition "success or failure"
Aug 17 23:18:38.867: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-db218da2-8dde-457a-842b-16545ba67a87 container client-container: 
STEP: delete the pod
Aug 17 23:18:38.928: INFO: Waiting for pod downwardapi-volume-db218da2-8dde-457a-842b-16545ba67a87 to disappear
Aug 17 23:18:38.955: INFO: Pod downwardapi-volume-db218da2-8dde-457a-842b-16545ba67a87 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:18:38.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3446" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":3925,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:18:38.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-0c77dea5-eb13-4b6c-b57c-3522c57f8534
STEP: Creating a pod to test consume configMaps
Aug 17 23:18:39.234: INFO: Waiting up to 5m0s for pod "pod-configmaps-50b6e9cd-f955-4e84-81de-00d90fb95d5b" in namespace "configmap-2178" to be "success or failure"
Aug 17 23:18:39.255: INFO: Pod "pod-configmaps-50b6e9cd-f955-4e84-81de-00d90fb95d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.45256ms
Aug 17 23:18:41.262: INFO: Pod "pod-configmaps-50b6e9cd-f955-4e84-81de-00d90fb95d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028472796s
Aug 17 23:18:43.268: INFO: Pod "pod-configmaps-50b6e9cd-f955-4e84-81de-00d90fb95d5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034181743s
STEP: Saw pod success
Aug 17 23:18:43.268: INFO: Pod "pod-configmaps-50b6e9cd-f955-4e84-81de-00d90fb95d5b" satisfied condition "success or failure"
Aug 17 23:18:43.272: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-50b6e9cd-f955-4e84-81de-00d90fb95d5b container configmap-volume-test: 
STEP: delete the pod
Aug 17 23:18:43.293: INFO: Waiting for pod pod-configmaps-50b6e9cd-f955-4e84-81de-00d90fb95d5b to disappear
Aug 17 23:18:43.297: INFO: Pod pod-configmaps-50b6e9cd-f955-4e84-81de-00d90fb95d5b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:18:43.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2178" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":3927,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:18:43.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:18:48.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9208" for this suite.

• [SLOW TEST:5.244 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":249,"skipped":3950,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:18:48.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-7299
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7299 to expose endpoints map[]
Aug 17 23:18:48.783: INFO: successfully validated that service endpoint-test2 in namespace services-7299 exposes endpoints map[] (15.192332ms elapsed)
STEP: Creating pod pod1 in namespace services-7299
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7299 to expose endpoints map[pod1:[80]]
Aug 17 23:18:52.889: INFO: successfully validated that service endpoint-test2 in namespace services-7299 exposes endpoints map[pod1:[80]] (4.092617924s elapsed)
STEP: Creating pod pod2 in namespace services-7299
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7299 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 17 23:18:57.471: INFO: successfully validated that service endpoint-test2 in namespace services-7299 exposes endpoints map[pod1:[80] pod2:[80]] (4.576142319s elapsed)
STEP: Deleting pod pod1 in namespace services-7299
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7299 to expose endpoints map[pod2:[80]]
Aug 17 23:18:57.517: INFO: successfully validated that service endpoint-test2 in namespace services-7299 exposes endpoints map[pod2:[80]] (37.434769ms elapsed)
STEP: Deleting pod pod2 in namespace services-7299
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7299 to expose endpoints map[]
Aug 17 23:18:57.537: INFO: successfully validated that service endpoint-test2 in namespace services-7299 exposes endpoints map[] (15.195194ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:18:57.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7299" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:9.035 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":250,"skipped":3961,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:18:57.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-5bdafd88-216f-456f-a7b3-758925de87f4
STEP: Creating secret with name s-test-opt-upd-9f87bdd1-a8d9-45b9-b45b-7479cc66f48c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5bdafd88-216f-456f-a7b3-758925de87f4
STEP: Updating secret s-test-opt-upd-9f87bdd1-a8d9-45b9-b45b-7479cc66f48c
STEP: Creating secret with name s-test-opt-create-317923f7-6103-4db0-8369-67f6803e856b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:20:27.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8048" for this suite.

• [SLOW TEST:89.934 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4092,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:20:27.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 17 23:20:27.722: INFO: Waiting up to 5m0s for pod "pod-62c52f3b-9d5a-4297-8357-5e4ad13f795e" in namespace "emptydir-2517" to be "success or failure"
Aug 17 23:20:27.744: INFO: Pod "pod-62c52f3b-9d5a-4297-8357-5e4ad13f795e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.737827ms
Aug 17 23:20:29.803: INFO: Pod "pod-62c52f3b-9d5a-4297-8357-5e4ad13f795e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080683074s
Aug 17 23:20:31.827: INFO: Pod "pod-62c52f3b-9d5a-4297-8357-5e4ad13f795e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104105347s
Aug 17 23:20:33.839: INFO: Pod "pod-62c52f3b-9d5a-4297-8357-5e4ad13f795e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.116225008s
STEP: Saw pod success
Aug 17 23:20:33.839: INFO: Pod "pod-62c52f3b-9d5a-4297-8357-5e4ad13f795e" satisfied condition "success or failure"
Aug 17 23:20:33.844: INFO: Trying to get logs from node jerma-worker pod pod-62c52f3b-9d5a-4297-8357-5e4ad13f795e container test-container: 
STEP: delete the pod
Aug 17 23:20:33.948: INFO: Waiting for pod pod-62c52f3b-9d5a-4297-8357-5e4ad13f795e to disappear
Aug 17 23:20:33.963: INFO: Pod pod-62c52f3b-9d5a-4297-8357-5e4ad13f795e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:20:33.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2517" for this suite.

• [SLOW TEST:6.879 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4120,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:20:34.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-pkbk
STEP: Creating a pod to test atomic-volume-subpath
Aug 17 23:20:34.817: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pkbk" in namespace "subpath-2995" to be "success or failure"
Aug 17 23:20:34.988: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Pending", Reason="", readiness=false. Elapsed: 171.049532ms
Aug 17 23:20:36.994: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177109692s
Aug 17 23:20:39.001: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Running", Reason="", readiness=true. Elapsed: 4.18342916s
Aug 17 23:20:41.007: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Running", Reason="", readiness=true. Elapsed: 6.190156642s
Aug 17 23:20:43.073: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Running", Reason="", readiness=true. Elapsed: 8.255673299s
Aug 17 23:20:45.079: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Running", Reason="", readiness=true. Elapsed: 10.261611328s
Aug 17 23:20:47.084: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Running", Reason="", readiness=true. Elapsed: 12.267376485s
Aug 17 23:20:49.091: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Running", Reason="", readiness=true. Elapsed: 14.274114403s
Aug 17 23:20:51.137: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Running", Reason="", readiness=true. Elapsed: 16.319696673s
Aug 17 23:20:53.246: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Running", Reason="", readiness=true. Elapsed: 18.429219577s
Aug 17 23:20:55.253: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Running", Reason="", readiness=true. Elapsed: 20.435673545s
Aug 17 23:20:57.257: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Running", Reason="", readiness=true. Elapsed: 22.440092218s
Aug 17 23:20:59.264: INFO: Pod "pod-subpath-test-secret-pkbk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.446491188s
STEP: Saw pod success
Aug 17 23:20:59.264: INFO: Pod "pod-subpath-test-secret-pkbk" satisfied condition "success or failure"
Aug 17 23:20:59.267: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-pkbk container test-container-subpath-secret-pkbk: 
STEP: delete the pod
Aug 17 23:20:59.294: INFO: Waiting for pod pod-subpath-test-secret-pkbk to disappear
Aug 17 23:20:59.323: INFO: Pod pod-subpath-test-secret-pkbk no longer exists
STEP: Deleting pod pod-subpath-test-secret-pkbk
Aug 17 23:20:59.323: INFO: Deleting pod "pod-subpath-test-secret-pkbk" in namespace "subpath-2995"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:20:59.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2995" for this suite.

• [SLOW TEST:24.926 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":253,"skipped":4121,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:20:59.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-r6kt
STEP: Creating a pod to test atomic-volume-subpath
Aug 17 23:20:59.457: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-r6kt" in namespace "subpath-6445" to be "success or failure"
Aug 17 23:20:59.544: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Pending", Reason="", readiness=false. Elapsed: 87.281549ms
Aug 17 23:21:01.569: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111987403s
Aug 17 23:21:03.768: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Running", Reason="", readiness=true. Elapsed: 4.310655448s
Aug 17 23:21:05.880: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Running", Reason="", readiness=true. Elapsed: 6.423005033s
Aug 17 23:21:07.887: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Running", Reason="", readiness=true. Elapsed: 8.429931109s
Aug 17 23:21:09.893: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Running", Reason="", readiness=true. Elapsed: 10.435738255s
Aug 17 23:21:11.900: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Running", Reason="", readiness=true. Elapsed: 12.442592096s
Aug 17 23:21:13.916: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Running", Reason="", readiness=true. Elapsed: 14.459111175s
Aug 17 23:21:15.923: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Running", Reason="", readiness=true. Elapsed: 16.465617139s
Aug 17 23:21:17.929: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Running", Reason="", readiness=true. Elapsed: 18.472494177s
Aug 17 23:21:19.935: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Running", Reason="", readiness=true. Elapsed: 20.478093815s
Aug 17 23:21:21.942: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Running", Reason="", readiness=true. Elapsed: 22.485476395s
Aug 17 23:21:23.948: INFO: Pod "pod-subpath-test-downwardapi-r6kt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.491169671s
STEP: Saw pod success
Aug 17 23:21:23.948: INFO: Pod "pod-subpath-test-downwardapi-r6kt" satisfied condition "success or failure"
Aug 17 23:21:23.953: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-r6kt container test-container-subpath-downwardapi-r6kt: 
STEP: delete the pod
Aug 17 23:21:24.006: INFO: Waiting for pod pod-subpath-test-downwardapi-r6kt to disappear
Aug 17 23:21:24.131: INFO: Pod pod-subpath-test-downwardapi-r6kt no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-r6kt
Aug 17 23:21:24.131: INFO: Deleting pod "pod-subpath-test-downwardapi-r6kt" in namespace "subpath-6445"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:21:24.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6445" for this suite.

• [SLOW TEST:24.804 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":254,"skipped":4163,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:21:24.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 17 23:21:31.421: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:21:32.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8269" for this suite.

• [SLOW TEST:8.370 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":255,"skipped":4175,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:21:32.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 23:21:32.711: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b56e595a-c8d8-464f-b47f-56aae015a09f" in namespace "projected-567" to be "success or failure"
Aug 17 23:21:32.881: INFO: Pod "downwardapi-volume-b56e595a-c8d8-464f-b47f-56aae015a09f": Phase="Pending", Reason="", readiness=false. Elapsed: 169.829264ms
Aug 17 23:21:34.891: INFO: Pod "downwardapi-volume-b56e595a-c8d8-464f-b47f-56aae015a09f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179852955s
Aug 17 23:21:36.941: INFO: Pod "downwardapi-volume-b56e595a-c8d8-464f-b47f-56aae015a09f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229761417s
Aug 17 23:21:39.007: INFO: Pod "downwardapi-volume-b56e595a-c8d8-464f-b47f-56aae015a09f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.295455294s
STEP: Saw pod success
Aug 17 23:21:39.007: INFO: Pod "downwardapi-volume-b56e595a-c8d8-464f-b47f-56aae015a09f" satisfied condition "success or failure"
Aug 17 23:21:39.043: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b56e595a-c8d8-464f-b47f-56aae015a09f container client-container: 
STEP: delete the pod
Aug 17 23:21:39.289: INFO: Waiting for pod downwardapi-volume-b56e595a-c8d8-464f-b47f-56aae015a09f to disappear
Aug 17 23:21:39.319: INFO: Pod downwardapi-volume-b56e595a-c8d8-464f-b47f-56aae015a09f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:21:39.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-567" for this suite.

• [SLOW TEST:6.806 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4219,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:21:39.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 23:21:42.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303301, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303301, loc:(*time.Location)(0x726af60)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303302, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303302, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:21:44.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303302, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303302, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303302, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303301, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:21:46.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303302, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303302, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303302, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303301, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 23:21:49.404: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:21:49.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-16-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:21:50.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7222" for this suite.
STEP: Destroying namespace "webhook-7222-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.456 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":257,"skipped":4223,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:21:50.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5635
STEP: Creating an active service to test reachability when its FQDN is referred to as the externalName of another service
STEP: creating service externalsvc in namespace services-5635
STEP: creating replication controller externalsvc in namespace services-5635
I0817 23:21:51.367004       7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5635, replica count: 2
I0817 23:21:54.418054       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 23:21:57.418666       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0817 23:22:00.419324       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 17 23:22:00.477: INFO: Creating new exec pod
Aug 17 23:22:06.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5635 execpod5bdgn -- /bin/sh -x -c nslookup nodeport-service'
Aug 17 23:22:11.281: INFO: stderr: "I0817 23:22:11.160876    5536 log.go:172] (0x40003ca210) (0x4000583400) Create stream\nI0817 23:22:11.164844    5536 log.go:172] (0x40003ca210) (0x4000583400) Stream added, broadcasting: 1\nI0817 23:22:11.179728    5536 log.go:172] (0x40003ca210) Reply frame received for 1\nI0817 23:22:11.181247    5536 log.go:172] (0x40003ca210) (0x40005c4000) Create stream\nI0817 23:22:11.181365    5536 log.go:172] (0x40003ca210) (0x40005c4000) Stream added, broadcasting: 3\nI0817 23:22:11.183705    5536 log.go:172] (0x40003ca210) Reply frame received for 3\nI0817 23:22:11.184279    5536 log.go:172] (0x40003ca210) (0x4000620000) Create stream\nI0817 23:22:11.184449    5536 log.go:172] (0x40003ca210) (0x4000620000) Stream added, broadcasting: 5\nI0817 23:22:11.186544    5536 log.go:172] (0x40003ca210) Reply frame received for 5\nI0817 23:22:11.249065    5536 log.go:172] (0x40003ca210) Data frame received for 5\nI0817 23:22:11.249501    5536 log.go:172] (0x4000620000) (5) Data frame handling\n+ nslookup nodeport-service\nI0817 23:22:11.250722    5536 log.go:172] (0x4000620000) (5) Data frame sent\nI0817 23:22:11.255143    5536 log.go:172] (0x40003ca210) Data frame received for 3\nI0817 23:22:11.255231    5536 log.go:172] (0x40005c4000) (3) Data frame handling\nI0817 23:22:11.255303    5536 log.go:172] (0x40005c4000) (3) Data frame sent\nI0817 23:22:11.256176    5536 log.go:172] (0x40003ca210) Data frame received for 3\nI0817 23:22:11.256274    5536 log.go:172] (0x40005c4000) (3) Data frame handling\nI0817 23:22:11.256386    5536 log.go:172] (0x40005c4000) (3) Data frame sent\nI0817 23:22:11.256846    5536 log.go:172] (0x40003ca210) Data frame received for 5\nI0817 23:22:11.256911    5536 log.go:172] (0x4000620000) (5) Data frame handling\nI0817 23:22:11.257077    5536 log.go:172] (0x40003ca210) Data frame received for 3\nI0817 23:22:11.257233    5536 log.go:172] (0x40005c4000) (3) Data frame handling\nI0817 23:22:11.258561    5536 log.go:172] (0x40003ca210) Data frame received for 1\nI0817 23:22:11.258696    5536 log.go:172] (0x4000583400) (1) Data frame handling\nI0817 23:22:11.258829    5536 log.go:172] (0x4000583400) (1) Data frame sent\nI0817 23:22:11.259517    5536 log.go:172] (0x40003ca210) (0x4000583400) Stream removed, broadcasting: 1\nI0817 23:22:11.262966    5536 log.go:172] (0x40003ca210) Go away received\nI0817 23:22:11.266888    5536 log.go:172] (0x40003ca210) (0x4000583400) Stream removed, broadcasting: 1\nI0817 23:22:11.267394    5536 log.go:172] (0x40003ca210) (0x40005c4000) Stream removed, broadcasting: 3\nI0817 23:22:11.267741    5536 log.go:172] (0x40003ca210) (0x4000620000) Stream removed, broadcasting: 5\n"
Aug 17 23:22:11.282: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5635.svc.cluster.local\tcanonical name = externalsvc.services-5635.svc.cluster.local.\nName:\texternalsvc.services-5635.svc.cluster.local\nAddress: 10.100.187.251\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5635, will wait for the garbage collector to delete the pods
Aug 17 23:22:11.345: INFO: Deleting ReplicationController externalsvc took: 7.764785ms
Aug 17 23:22:11.446: INFO: Terminating ReplicationController externalsvc pods took: 100.71963ms
Aug 17 23:22:22.387: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:22:22.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5635" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:31.792 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":258,"skipped":4275,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:22:22.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-9343f026-5e52-4490-be77-889d3acb4a6b
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:22:28.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7009" for this suite.

• [SLOW TEST:6.324 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4288,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:22:28.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:22:45.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-918" for this suite.

• [SLOW TEST:16.234 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":260,"skipped":4315,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:22:45.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 17 23:22:48.367: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 17 23:22:50.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303368, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303368, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303368, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303368, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 17 23:22:53.583: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:22:55.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8495" for this suite.
STEP: Destroying namespace "webhook-8495-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.831 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":261,"skipped":4344,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:22:55.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 23:22:56.874: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9828004a-8061-4c13-aa86-e9d2c841dcce" in namespace "projected-7821" to be "success or failure"
Aug 17 23:22:56.931: INFO: Pod "downwardapi-volume-9828004a-8061-4c13-aa86-e9d2c841dcce": Phase="Pending", Reason="", readiness=false. Elapsed: 57.04691ms
Aug 17 23:22:58.980: INFO: Pod "downwardapi-volume-9828004a-8061-4c13-aa86-e9d2c841dcce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1058478s
Aug 17 23:23:01.080: INFO: Pod "downwardapi-volume-9828004a-8061-4c13-aa86-e9d2c841dcce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205917601s
Aug 17 23:23:03.087: INFO: Pod "downwardapi-volume-9828004a-8061-4c13-aa86-e9d2c841dcce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.213096293s
STEP: Saw pod success
Aug 17 23:23:03.087: INFO: Pod "downwardapi-volume-9828004a-8061-4c13-aa86-e9d2c841dcce" satisfied condition "success or failure"
Aug 17 23:23:03.093: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9828004a-8061-4c13-aa86-e9d2c841dcce container client-container: 
STEP: delete the pod
Aug 17 23:23:03.127: INFO: Waiting for pod downwardapi-volume-9828004a-8061-4c13-aa86-e9d2c841dcce to disappear
Aug 17 23:23:03.175: INFO: Pod downwardapi-volume-9828004a-8061-4c13-aa86-e9d2c841dcce no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:23:03.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7821" for this suite.

• [SLOW TEST:7.237 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4345,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:23:03.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:23:03.492: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 17 23:23:08.499: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 17 23:23:08.499: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 17 23:23:10.505: INFO: Creating deployment "test-rollover-deployment"
Aug 17 23:23:10.543: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 17 23:23:12.664: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 17 23:23:12.822: INFO: Ensure that both replica sets have 1 created replica
Aug 17 23:23:13.001: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 17 23:23:13.014: INFO: Updating deployment test-rollover-deployment
Aug 17 23:23:13.014: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 17 23:23:15.128: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 17 23:23:15.140: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 17 23:23:15.152: INFO: all replica sets need to contain the pod-template-hash label
Aug 17 23:23:15.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303394, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:23:17.169: INFO: all replica sets need to contain the pod-template-hash label
Aug 17 23:23:17.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303394, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:23:19.500: INFO: all replica sets need to contain the pod-template-hash label
Aug 17 23:23:19.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303397, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:23:21.184: INFO: all replica sets need to contain the pod-template-hash label
Aug 17 23:23:21.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303397, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:23:23.279: INFO: all replica sets need to contain the pod-template-hash label
Aug 17 23:23:23.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303397, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:23:25.169: INFO: all replica sets need to contain the pod-template-hash label
Aug 17 23:23:25.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303397, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:23:27.166: INFO: all replica sets need to contain the pod-template-hash label
Aug 17 23:23:27.166: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303397, loc:(*time.Location)(0x726af60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733303390, loc:(*time.Location)(0x726af60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 17 23:23:29.168: INFO: 
Aug 17 23:23:29.169: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 17 23:23:29.184: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-4147 /apis/apps/v1/namespaces/deployment-4147/deployments/test-rollover-deployment adab69dc-47f8-494d-923c-e22d92582c61 902197 2 2020-08-17 23:23:10 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400329e498  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-17 23:23:10 +0000 UTC,LastTransitionTime:2020-08-17 23:23:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-08-17 23:23:27 +0000 UTC,LastTransitionTime:2020-08-17 23:23:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 17 23:23:29.190: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-4147 /apis/apps/v1/namespaces/deployment-4147/replicasets/test-rollover-deployment-574d6dfbff c29e720b-6844-42cd-a004-f805c056b5bc 902185 2 2020-08-17 23:23:13 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment adab69dc-47f8-494d-923c-e22d92582c61 0x400329ebe7 0x400329ebe8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400329eca8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 17 23:23:29.190: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 17 23:23:29.190: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-4147 /apis/apps/v1/namespaces/deployment-4147/replicasets/test-rollover-controller 25824722-3970-4265-9dce-836bcb9e839d 902195 2 2020-08-17 23:23:03 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment adab69dc-47f8-494d-923c-e22d92582c61 0x400329ea47 0x400329ea48}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x400329eb08  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 17 23:23:29.191: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-4147 /apis/apps/v1/namespaces/deployment-4147/replicasets/test-rollover-deployment-f6c94f66c 107ebe5f-0f3c-4e27-b9be-a3de7bc2eb21 902140 2 2020-08-17 23:23:10 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment adab69dc-47f8-494d-923c-e22d92582c61 0x400329eda0 0x400329eda1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400329ee18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 17 23:23:29.197: INFO: Pod "test-rollover-deployment-574d6dfbff-k9tqp" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-k9tqp test-rollover-deployment-574d6dfbff- deployment-4147 /api/v1/namespaces/deployment-4147/pods/test-rollover-deployment-574d6dfbff-k9tqp e4f1d1e6-9767-4db6-8d3b-c4ae29271f86 902155 0 2020-08-17 23:23:13 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff c29e720b-6844-42cd-a004-f805c056b5bc 0x400329f527 0x400329f528}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v8pd9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v8pd9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v8pd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:23:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:23:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:23:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:23:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.147,StartTime:2020-08-17 23:23:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:23:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://bbb3eb6c23c12f3f71296a6f7ec0a3c32f9f36137c83ae24e86e2f9aabc93abd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.147,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:23:29.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4147" for this suite.

• [SLOW TEST:25.981 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":263,"skipped":4389,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:23:29.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:23:29.425: INFO: Waiting up to 5m0s for pod "busybox-user-65534-f1ce313a-ba21-48f1-a9c6-ffc28c883f00" in namespace "security-context-test-6480" to be "success or failure"
Aug 17 23:23:29.436: INFO: Pod "busybox-user-65534-f1ce313a-ba21-48f1-a9c6-ffc28c883f00": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25163ms
Aug 17 23:23:31.520: INFO: Pod "busybox-user-65534-f1ce313a-ba21-48f1-a9c6-ffc28c883f00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094671429s
Aug 17 23:23:33.527: INFO: Pod "busybox-user-65534-f1ce313a-ba21-48f1-a9c6-ffc28c883f00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101244403s
Aug 17 23:23:35.535: INFO: Pod "busybox-user-65534-f1ce313a-ba21-48f1-a9c6-ffc28c883f00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108893246s
Aug 17 23:23:35.535: INFO: Pod "busybox-user-65534-f1ce313a-ba21-48f1-a9c6-ffc28c883f00" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:23:35.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6480" for this suite.

• [SLOW TEST:6.337 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4401,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:23:35.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 17 23:23:35.688: INFO: Waiting up to 5m0s for pod "downward-api-4a67a570-5354-4e36-a288-6e400fbb0bf6" in namespace "downward-api-2496" to be "success or failure"
Aug 17 23:23:35.699: INFO: Pod "downward-api-4a67a570-5354-4e36-a288-6e400fbb0bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306527ms
Aug 17 23:23:37.707: INFO: Pod "downward-api-4a67a570-5354-4e36-a288-6e400fbb0bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01834373s
Aug 17 23:23:39.716: INFO: Pod "downward-api-4a67a570-5354-4e36-a288-6e400fbb0bf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027427051s
STEP: Saw pod success
Aug 17 23:23:39.716: INFO: Pod "downward-api-4a67a570-5354-4e36-a288-6e400fbb0bf6" satisfied condition "success or failure"
Aug 17 23:23:39.720: INFO: Trying to get logs from node jerma-worker2 pod downward-api-4a67a570-5354-4e36-a288-6e400fbb0bf6 container dapi-container: 
STEP: delete the pod
Aug 17 23:23:39.799: INFO: Waiting for pod downward-api-4a67a570-5354-4e36-a288-6e400fbb0bf6 to disappear
Aug 17 23:23:39.812: INFO: Pod downward-api-4a67a570-5354-4e36-a288-6e400fbb0bf6 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:23:39.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2496" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4411,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:23:39.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-rcnh
STEP: Creating a pod to test atomic-volume-subpath
Aug 17 23:23:39.953: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rcnh" in namespace "subpath-2016" to be "success or failure"
Aug 17 23:23:39.968: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.87701ms
Aug 17 23:23:41.975: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021686818s
Aug 17 23:23:44.000: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Running", Reason="", readiness=true. Elapsed: 4.04662184s
Aug 17 23:23:46.006: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Running", Reason="", readiness=true. Elapsed: 6.053158568s
Aug 17 23:23:48.012: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Running", Reason="", readiness=true. Elapsed: 8.058850179s
Aug 17 23:23:50.018: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Running", Reason="", readiness=true. Elapsed: 10.064649864s
Aug 17 23:23:52.024: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Running", Reason="", readiness=true. Elapsed: 12.070650897s
Aug 17 23:23:54.229: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Running", Reason="", readiness=true. Elapsed: 14.276320877s
Aug 17 23:23:56.239: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Running", Reason="", readiness=true. Elapsed: 16.28619008s
Aug 17 23:23:58.246: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Running", Reason="", readiness=true. Elapsed: 18.293262849s
Aug 17 23:24:00.253: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Running", Reason="", readiness=true. Elapsed: 20.299850557s
Aug 17 23:24:02.260: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Running", Reason="", readiness=true. Elapsed: 22.306957039s
Aug 17 23:24:04.267: INFO: Pod "pod-subpath-test-projected-rcnh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.313944579s
STEP: Saw pod success
Aug 17 23:24:04.267: INFO: Pod "pod-subpath-test-projected-rcnh" satisfied condition "success or failure"
Aug 17 23:24:04.272: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-rcnh container test-container-subpath-projected-rcnh: 
STEP: delete the pod
Aug 17 23:24:04.348: INFO: Waiting for pod pod-subpath-test-projected-rcnh to disappear
Aug 17 23:24:04.402: INFO: Pod pod-subpath-test-projected-rcnh no longer exists
STEP: Deleting pod pod-subpath-test-projected-rcnh
Aug 17 23:24:04.402: INFO: Deleting pod "pod-subpath-test-projected-rcnh" in namespace "subpath-2016"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:24:04.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2016" for this suite.

• [SLOW TEST:24.589 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":266,"skipped":4413,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:24:04.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-ee3bda3c-1127-494a-aa6c-92cc429a7d26
STEP: Creating a pod to test consume secrets
Aug 17 23:24:04.547: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1419048-834a-4018-a711-07e940bd004a" in namespace "projected-8624" to be "success or failure"
Aug 17 23:24:04.569: INFO: Pod "pod-projected-secrets-e1419048-834a-4018-a711-07e940bd004a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.244394ms
Aug 17 23:24:06.577: INFO: Pod "pod-projected-secrets-e1419048-834a-4018-a711-07e940bd004a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029421435s
Aug 17 23:24:08.606: INFO: Pod "pod-projected-secrets-e1419048-834a-4018-a711-07e940bd004a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058739477s
Aug 17 23:24:10.613: INFO: Pod "pod-projected-secrets-e1419048-834a-4018-a711-07e940bd004a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065234078s
STEP: Saw pod success
Aug 17 23:24:10.613: INFO: Pod "pod-projected-secrets-e1419048-834a-4018-a711-07e940bd004a" satisfied condition "success or failure"
Aug 17 23:24:10.617: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e1419048-834a-4018-a711-07e940bd004a container projected-secret-volume-test: 
STEP: delete the pod
Aug 17 23:24:10.659: INFO: Waiting for pod pod-projected-secrets-e1419048-834a-4018-a711-07e940bd004a to disappear
Aug 17 23:24:10.669: INFO: Pod pod-projected-secrets-e1419048-834a-4018-a711-07e940bd004a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:24:10.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8624" for this suite.

• [SLOW TEST:6.277 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4417,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:24:10.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-8b2f3949-be2c-47d6-a950-1dbc82386bca
STEP: Creating secret with name s-test-opt-upd-1c48878a-15c6-4690-aac8-90721fba2970
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-8b2f3949-be2c-47d6-a950-1dbc82386bca
STEP: Updating secret s-test-opt-upd-1c48878a-15c6-4690-aac8-90721fba2970
STEP: Creating secret with name s-test-opt-create-ad7456aa-c3eb-4de6-a798-b6f031b9e8f8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:24:18.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8160" for this suite.

• [SLOW TEST:8.298 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4426,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:24:18.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:24:19.106: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 17 23:24:24.115: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 17 23:24:24.116: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 17 23:24:30.271: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-9144 /apis/apps/v1/namespaces/deployment-9144/deployments/test-cleanup-deployment 659c652b-6b91-46bb-aca2-9557e9d1daad 902597 1 2020-08-17 23:24:24 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40068ffb88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-17 23:24:24 +0000 UTC,LastTransitionTime:2020-08-17 23:24:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-08-17 23:24:28 +0000 UTC,LastTransitionTime:2020-08-17 23:24:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 17 23:24:30.277: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-9144 /apis/apps/v1/namespaces/deployment-9144/replicasets/test-cleanup-deployment-55ffc6b7b6 12cdbac1-7cd1-43cb-b1db-7a8edb1b2382 902584 1 2020-08-17 23:24:24 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 659c652b-6b91-46bb-aca2-9557e9d1daad 0x400592d607 0x400592d608}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400592d678  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 17 23:24:30.283: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-z4nqd" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-z4nqd test-cleanup-deployment-55ffc6b7b6- deployment-9144 /api/v1/namespaces/deployment-9144/pods/test-cleanup-deployment-55ffc6b7b6-z4nqd a083b4a5-2064-4a54-83e6-d30298afe830 902583 0 2020-08-17 23:24:24 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 12cdbac1-7cd1-43cb-b1db-7a8edb1b2382 0x400592d9e7 0x400592d9e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2h5lw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2h5lw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2h5lw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:24:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:24:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:24:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:24:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.149,StartTime:2020-08-17 23:24:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:24:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1050c547f0cba98f0760963348e4d021fe651f990c5e8883325a94e567b90c38,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:24:30.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9144" for this suite.

• [SLOW TEST:11.298 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":269,"skipped":4440,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:24:30.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-1d83d8ee-5d56-4679-9aef-3b6bfd43988c
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-1d83d8ee-5d56-4679-9aef-3b6bfd43988c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:24:36.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6243" for this suite.

• [SLOW TEST:6.237 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4441,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:24:36.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:24:36.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 17 23:24:38.298: INFO: stderr: ""
Aug 17 23:24:38.298: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:24:38.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2710" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":271,"skipped":4459,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:24:38.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 17 23:24:38.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dacf3ad2-e224-4cd1-be37-fe0d947d0b8a" in namespace "downward-api-1659" to be "success or failure"
Aug 17 23:24:38.827: INFO: Pod "downwardapi-volume-dacf3ad2-e224-4cd1-be37-fe0d947d0b8a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.989285ms
Aug 17 23:24:40.833: INFO: Pod "downwardapi-volume-dacf3ad2-e224-4cd1-be37-fe0d947d0b8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019504543s
Aug 17 23:24:42.986: INFO: Pod "downwardapi-volume-dacf3ad2-e224-4cd1-be37-fe0d947d0b8a": Phase="Running", Reason="", readiness=true. Elapsed: 4.171994177s
Aug 17 23:24:45.212: INFO: Pod "downwardapi-volume-dacf3ad2-e224-4cd1-be37-fe0d947d0b8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.398930327s
STEP: Saw pod success
Aug 17 23:24:45.213: INFO: Pod "downwardapi-volume-dacf3ad2-e224-4cd1-be37-fe0d947d0b8a" satisfied condition "success or failure"
Aug 17 23:24:45.219: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-dacf3ad2-e224-4cd1-be37-fe0d947d0b8a container client-container: 
STEP: delete the pod
Aug 17 23:24:45.266: INFO: Waiting for pod downwardapi-volume-dacf3ad2-e224-4cd1-be37-fe0d947d0b8a to disappear
Aug 17 23:24:45.282: INFO: Pod downwardapi-volume-dacf3ad2-e224-4cd1-be37-fe0d947d0b8a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:24:45.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1659" for this suite.

• [SLOW TEST:6.820 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4467,"failed":0}
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:24:45.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:24:45.465: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: 
alternatives.log
containers/
[the same two-entry directory listing was returned for the remaining proxy attempts (1) through (19); the rest of this test, its PASSED record, and the header of the next test, "[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]", are truncated in the source]
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-411fc0d8-989c-4adf-87cc-79c15728c7df
STEP: Creating a pod to test consume configMaps
Aug 17 23:24:45.737: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf806fe8-4c64-470e-be66-9f22ad9f7979" in namespace "projected-5700" to be "success or failure"
Aug 17 23:24:45.743: INFO: Pod "pod-projected-configmaps-bf806fe8-4c64-470e-be66-9f22ad9f7979": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014066ms
Aug 17 23:24:48.281: INFO: Pod "pod-projected-configmaps-bf806fe8-4c64-470e-be66-9f22ad9f7979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543413908s
Aug 17 23:24:50.440: INFO: Pod "pod-projected-configmaps-bf806fe8-4c64-470e-be66-9f22ad9f7979": Phase="Pending", Reason="", readiness=false. Elapsed: 4.702760185s
Aug 17 23:24:52.461: INFO: Pod "pod-projected-configmaps-bf806fe8-4c64-470e-be66-9f22ad9f7979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.723668149s
STEP: Saw pod success
Aug 17 23:24:52.463: INFO: Pod "pod-projected-configmaps-bf806fe8-4c64-470e-be66-9f22ad9f7979" satisfied condition "success or failure"
Aug 17 23:24:52.480: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-bf806fe8-4c64-470e-be66-9f22ad9f7979 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 17 23:24:52.511: INFO: Waiting for pod pod-projected-configmaps-bf806fe8-4c64-470e-be66-9f22ad9f7979 to disappear
Aug 17 23:24:52.521: INFO: Pod pod-projected-configmaps-bf806fe8-4c64-470e-be66-9f22ad9f7979 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:24:52.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5700" for this suite.

• [SLOW TEST:6.939 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4473,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:24:52.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-c2170935-1909-4336-ba0f-05196c292a50
STEP: Creating a pod to test consume configMaps
Aug 17 23:24:52.655: INFO: Waiting up to 5m0s for pod "pod-configmaps-535fb713-71da-423e-98cf-4fab656fd9f7" in namespace "configmap-8189" to be "success or failure"
Aug 17 23:24:52.739: INFO: Pod "pod-configmaps-535fb713-71da-423e-98cf-4fab656fd9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 83.991174ms
Aug 17 23:24:54.746: INFO: Pod "pod-configmaps-535fb713-71da-423e-98cf-4fab656fd9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090938718s
Aug 17 23:24:56.751: INFO: Pod "pod-configmaps-535fb713-71da-423e-98cf-4fab656fd9f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096413428s
STEP: Saw pod success
Aug 17 23:24:56.751: INFO: Pod "pod-configmaps-535fb713-71da-423e-98cf-4fab656fd9f7" satisfied condition "success or failure"
Aug 17 23:24:56.755: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-535fb713-71da-423e-98cf-4fab656fd9f7 container configmap-volume-test: 
STEP: delete the pod
Aug 17 23:24:56.814: INFO: Waiting for pod pod-configmaps-535fb713-71da-423e-98cf-4fab656fd9f7 to disappear
Aug 17 23:24:56.832: INFO: Pod pod-configmaps-535fb713-71da-423e-98cf-4fab656fd9f7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:24:56.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8189" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4475,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:24:56.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Aug 17 23:24:57.036: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4515" to be "success or failure"
Aug 17 23:24:57.071: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 34.4946ms
Aug 17 23:24:59.078: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041159388s
Aug 17 23:25:01.083: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046681835s
Aug 17 23:25:03.090: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053531766s
STEP: Saw pod success
Aug 17 23:25:03.090: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 17 23:25:03.094: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 17 23:25:03.116: INFO: Waiting for pod pod-host-path-test to disappear
Aug 17 23:25:03.125: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:25:03.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4515" for this suite.

• [SLOW TEST:6.284 seconds]
[sig-storage] HostPath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4537,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:25:03.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 17 23:25:03.270: INFO: Creating deployment "webserver-deployment"
Aug 17 23:25:03.276: INFO: Waiting for observed generation 1
Aug 17 23:25:05.379: INFO: Waiting for all required pods to come up
Aug 17 23:25:05.409: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 17 23:25:19.428: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 17 23:25:19.439: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 17 23:25:19.448: INFO: Updating deployment webserver-deployment
Aug 17 23:25:19.448: INFO: Waiting for observed generation 2
Aug 17 23:25:21.469: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 17 23:25:21.632: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 17 23:25:21.637: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 17 23:25:21.652: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 17 23:25:21.653: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 17 23:25:21.657: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 17 23:25:21.665: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 17 23:25:21.665: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 17 23:25:21.674: INFO: Updating deployment webserver-deployment
Aug 17 23:25:21.674: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 17 23:25:21.895: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 17 23:25:24.898: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
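
For the arithmetic above: the Deployment dump below shows replicas=30 with a RollingUpdate strategy of maxSurge=3 and maxUnavailable=2, so the controller may run at most 33 pods in total, and the 20 + 13 split across the old and new ReplicaSets sums exactly to that cap. A tiny sketch of those bounds (only the caps, not the controller's proportional-distribution algorithm):

package main

import "fmt"

func main() {
    // Values from the webserver-deployment dump below: replicas=30, maxSurge=3, maxUnavailable=2.
    replicas, maxSurge, maxUnavailable := 30, 3, 2

    maxTotal := replicas + maxSurge // 33: the 20 + 13 ReplicaSet split sums exactly to this cap
    // Floor the controller respects when it scales pods down itself; it is not a guarantee
    // when pods simply fail to become available (here only 8 of 33 are available).
    minAvailable := replicas - maxUnavailable // 28

    fmt.Println(maxTotal, minAvailable)
}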
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 17 23:25:25.992: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-455 /apis/apps/v1/namespaces/deployment-455/deployments/webserver-deployment 8fbd4911-2909-4aef-b936-cd2fd37776ce 903162 3 2020-08-17 23:25:03 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40039a18e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-17 23:25:21 +0000 UTC,LastTransitionTime:2020-08-17 23:25:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-08-17 23:25:22 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 17 23:25:26.151: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-455 /apis/apps/v1/namespaces/deployment-455/replicasets/webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 903156 3 2020-08-17 23:25:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 8fbd4911-2909-4aef-b936-cd2fd37776ce 0x40039a1db7 0x40039a1db8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40039a1e28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 17 23:25:26.151: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 17 23:25:26.152: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-455 /apis/apps/v1/namespaces/deployment-455/replicasets/webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 903152 3 2020-08-17 23:25:03 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 8fbd4911-2909-4aef-b936-cd2fd37776ce 0x40039a1cf7 0x40039a1cf8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40039a1d58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 17 23:25:26.621: INFO: Pod "webserver-deployment-595b5b9587-44x4c" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-44x4c webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-44x4c 315d0d30-8157-4dc3-82d6-7d37841d1cc4 903171 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x400329f6c7 0x400329f6c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.622: INFO: Pod "webserver-deployment-595b5b9587-6n8zq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6n8zq webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-6n8zq c627042c-344c-471b-8317-5ee2d3f1e23c 903200 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x400329f990 0x400329f991}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.624: INFO: Pod "webserver-deployment-595b5b9587-6wprj" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6wprj webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-6wprj e6cf7447-7bb8-4e26-9657-9eb30b5f5124 903028 0 2020-08-17 23:25:03 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x400329fb70 0x400329fb71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-17 23:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.210,StartTime:2020-08-17 23:25:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:25:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1237fd2901c8b6b4cb76dda2ee60f78d3c5960a42aebb3a0586286fa5ca8b6cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.625: INFO: Pod "webserver-deployment-595b5b9587-6zhzh" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6zhzh webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-6zhzh 98d3550c-2b16-4c0e-9184-da481535e0c7 903018 0 2020-08-17 23:25:03 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4090 0x40039e4091}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-17 23:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.156,StartTime:2020-08-17 23:25:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:25:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a813bc81307081c32076a672a1e9d35593459059103b0084dc9575e7b8c1fef6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.156,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.626: INFO: Pod "webserver-deployment-595b5b9587-87fsf" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-87fsf webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-87fsf b64d93ba-bc0a-48f6-bfae-4829a02bb0bc 902997 0 2020-08-17 23:25:03 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4200 0x40039e4201}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-17 23:25:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.153,StartTime:2020-08-17 23:25:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:25:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9b71a80812d6464ad3dc0df271f88374861d037e4fec27b7aae1ece2b691e6f4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.153,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.627: INFO: Pod "webserver-deployment-595b5b9587-8dsnb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8dsnb webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-8dsnb 344a039d-c169-4ca9-b3b8-bedbe1096b93 902974 0 2020-08-17 23:25:03 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4370 0x40039e4371}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-17 23:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.152,StartTime:2020-08-17 23:25:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:25:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b2da3c52eaabe15e504c7907beac9adafcb5e6e7a8fd2a7d3f03fcaca95f11fa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.628: INFO: Pod "webserver-deployment-595b5b9587-bw9bg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bw9bg webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-bw9bg bca0a040-29c6-49d2-8ce1-23eb26cdfe29 903205 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e44e0 0x40039e44e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.629: INFO: Pod "webserver-deployment-595b5b9587-hhfsd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hhfsd webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-hhfsd 1d6fa3a7-1277-4ef2-897d-7d5e6cbbdf59 903215 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4630 0x40039e4631}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.630: INFO: Pod "webserver-deployment-595b5b9587-jtjs7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jtjs7 webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-jtjs7 c40eb2e9-8f34-445f-958f-b46cee1171f1 902983 0 2020-08-17 23:25:03 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4780 0x40039e4781}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.206,StartTime:2020-08-17 23:25:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:25:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://555a187dc48beb88c7280306c389703e583dfc2e5b75e70c4aa7d1930b4abce8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.631: INFO: Pod "webserver-deployment-595b5b9587-kjj4c" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kjj4c webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-kjj4c 08d6db82-76c2-4c14-a73a-e1c4f41f38a2 903218 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4900 0x40039e4901}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.632: INFO: Pod "webserver-deployment-595b5b9587-m67bd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-m67bd webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-m67bd 6386394a-a532-4b41-8328-d19268b30586 903183 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4a50 0x40039e4a51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.633: INFO: Pod "webserver-deployment-595b5b9587-m9b4b" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-m9b4b webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-m9b4b d2e96ce6-9725-45de-94a8-f99a4b55b0e2 903170 0 2020-08-17 23:25:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4ba0 0x40039e4ba1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.634: INFO: Pod "webserver-deployment-595b5b9587-nsr5z" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nsr5z webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-nsr5z c358c211-d233-408a-9d60-6a87ac0ad234 902991 0 2020-08-17 23:25:03 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4cf0 0x40039e4cf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.208,StartTime:2020-08-17 23:25:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:25:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c72093a1a5a2fe6431d76eb530f284b4610b28513b7c253210d041bfb60f03f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.635: INFO: Pod "webserver-deployment-595b5b9587-p4tr8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-p4tr8 webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-p4tr8 a067c250-ae97-444c-8f5d-f757e2b3b8eb 903153 0 2020-08-17 23:25:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4e60 0x40039e4e61}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.636: INFO: Pod "webserver-deployment-595b5b9587-r6q58" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-r6q58 webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-r6q58 07168353-5072-4aa6-a0b2-a336e9769824 903007 0 2020-08-17 23:25:03 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e4fb0 0x40039e4fb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.209,StartTime:2020-08-17 23:25:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:25:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://afa6c208a43bb96e78957d2c763bddb1f7afd9d0aa055e24bf80b1e47f84c838,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.637: INFO: Pod "webserver-deployment-595b5b9587-vgw5r" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vgw5r webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-vgw5r 3d96d25e-cdc1-477c-82e6-b9672db17183 902973 0 2020-08-17 23:25:03 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e5120 0x40039e5121}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.207,StartTime:2020-08-17 23:25:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-17 23:25:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3789caa3742893598a5d5a9c925cd3a463a4050331ad06db9e5e10e692fd35b0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.207,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.638: INFO: Pod "webserver-deployment-595b5b9587-wbkcl" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wbkcl webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-wbkcl 62c45ca5-3488-401e-8aeb-e1c674dfb02d 903158 0 2020-08-17 23:25:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e5290 0x40039e5291}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.639: INFO: Pod "webserver-deployment-595b5b9587-wm2bz" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wm2bz webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-wm2bz 507edc33-d3c4-455b-8993-c40c817adf73 903176 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e53e0 0x40039e53e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.640: INFO: Pod "webserver-deployment-595b5b9587-wrggx" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wrggx webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-wrggx 2f4511fe-98f6-4537-9cb8-09302337e9b2 903175 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e5530 0x40039e5531}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
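The "is not available" lines above reflect the deployment controller's availability rule: a pod counts as available only once its Ready condition is True and has stayed True for the deployment's minReadySeconds. A minimal Go sketch of that check, assuming the k8s.io/api types; isPodAvailable is an illustrative name, not the framework's own helper:

package podutil

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// isPodAvailable reports whether pod has been Ready for at least
// minReadySeconds as of now, mirroring the rule the deployment
// controller applies when counting available replicas.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady {
			continue
		}
		if c.Status != corev1.ConditionTrue {
			return false // e.g. ContainersNotReady, as in the dumps above
		}
		if minReadySeconds == 0 {
			return true
		}
		minReady := time.Duration(minReadySeconds) * time.Second
		return c.LastTransitionTime.Add(minReady).Before(now)
	}
	return false // no Ready condition recorded yet
}

Every pod dumped here fails this check immediately: its Ready condition is False with reason ContainersNotReady.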
Aug 17 23:25:26.641: INFO: Pod "webserver-deployment-595b5b9587-zs77b" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zs77b webserver-deployment-595b5b9587- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-595b5b9587-zs77b dfc2984d-6c9d-46a1-814b-c6befeaed69a 903219 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2fc677f5-86d3-45fc-8f8e-52af7430bf83 0x40039e5680 0x40039e5681}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.642: INFO: Pod "webserver-deployment-c7997dcc8-27t7k" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-27t7k webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-27t7k 78e99e63-cdb5-4f04-9e36-de9387dc766e 903234 0 2020-08-17 23:25:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x40039e57d0 0x40039e57d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.211,StartTime:2020-08-17 23:25:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.211,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
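The webserver-deployment-c7997dcc8 pods are Pending because this test rolls the deployment to the deliberately non-existent image webserver:404, which the runtime expands to docker.io/library/webserver:404 and fails to pull, as the ErrImagePull message above shows. A short client-go sketch for surfacing such pods and their waiting reasons; the kubeconfig path matches this run, and a client-go version whose List takes a context is assumed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig location used by this e2e run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("deployment-455").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil {
				// Reasons seen in this log: ContainerCreating, ErrImagePull.
				fmt.Printf("%s/%s: %s %s\n", pod.Name, cs.Name, w.Reason, w.Message)
			}
		}
	}
}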
Aug 17 23:25:26.644: INFO: Pod "webserver-deployment-c7997dcc8-6tkgx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6tkgx webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-6tkgx bebdbbac-a6fa-496c-8493-0296ae794247 903214 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x40039e5970 0x40039e5971}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.645: INFO: Pod "webserver-deployment-c7997dcc8-9lghx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9lghx webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-9lghx 5a3dc7a5-0140-4e5f-b490-57613d48f5f0 903192 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x40039e5ae0 0x40039e5ae1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.646: INFO: Pod "webserver-deployment-c7997dcc8-br87t" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-br87t webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-br87t 6e18704d-675f-42ee-a0ba-9d93176cfe7a 903163 0 2020-08-17 23:25:21 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x40039e5c50 0x40039e5c51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.647: INFO: Pod "webserver-deployment-c7997dcc8-cq44n" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cq44n webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-cq44n 57de0f01-bb20-4c44-9f90-a551b9b04c44 903181 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x40039e5dc0 0x40039e5dc1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
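Note that every PodSpec in these dumps carries the same two tolerations even though the test sets none: node.kubernetes.io/not-ready and node.kubernetes.io/unreachable, both NoExecute with tolerationSeconds 300 (the *300 in the dump marks a pointer value). They are injected by the DefaultTolerationSeconds admission plugin; expressed as client-go types, the injected values correspond to this sketch:

package defaults

import corev1 "k8s.io/api/core/v1"

// Mirrors the TolerationSeconds:*300 pointers in the dumps above.
var defaultTolerationSeconds int64 = 300

// defaultTolerations is what the DefaultTolerationSeconds admission
// plugin adds to pods that do not declare these tolerations themselves.
var defaultTolerations = []corev1.Toleration{
	{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &defaultTolerationSeconds,
	},
	{
		Key:               "node.kubernetes.io/unreachable",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &defaultTolerationSeconds,
	},
}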
Aug 17 23:25:26.648: INFO: Pod "webserver-deployment-c7997dcc8-fmrd6" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fmrd6 webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-fmrd6 516efc76-2b9a-4f19-9f1d-053b4915aed5 903236 0 2020-08-17 23:25:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x40039e5f30 0x40039e5f31}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.157,StartTime:2020-08-17 23:25:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.649: INFO: Pod "webserver-deployment-c7997dcc8-fslnb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fslnb webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-fslnb 073cd465-095b-4dd1-9c0e-8c5cdcd04ea8 903072 0 2020-08-17 23:25:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x4005a860d0 0x4005a860d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.650: INFO: Pod "webserver-deployment-c7997dcc8-hcgjh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hcgjh webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-hcgjh e6cae7f1-5e68-4e84-8126-87577b198ee7 903227 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x4005a86240 0x4005a86241}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.651: INFO: Pod "webserver-deployment-c7997dcc8-mfhps" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mfhps webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-mfhps 1cf382de-9358-4728-854e-e2e09b09760c 903087 0 2020-08-17 23:25:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x4005a863b0 0x4005a863b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.652: INFO: Pod "webserver-deployment-c7997dcc8-mr966" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mr966 webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-mr966 8bdeeb96-0942-4f70-8d58-9c2a36bcd181 903241 0 2020-08-17 23:25:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x4005a86520 0x4005a86521}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.158,StartTime:2020-08-17 23:25:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.653: INFO: Pod "webserver-deployment-c7997dcc8-rgmkl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rgmkl webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-rgmkl 966952d2-8b67-4558-a195-4874cca08fac 903191 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x4005a866c0 0x4005a866c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.654: INFO: Pod "webserver-deployment-c7997dcc8-rtcst" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rtcst webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-rtcst 5cbbeba7-8a11-4ceb-ba21-b53de930c15d 903201 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x4005a86840 0x4005a86841}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 17 23:25:26.655: INFO: Pod "webserver-deployment-c7997dcc8-t8rdv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t8rdv webserver-deployment-c7997dcc8- deployment-455 /api/v1/namespaces/deployment-455/pods/webserver-deployment-c7997dcc8-t8rdv ced2d001-ea04-4c96-84e1-d2e958f66e77 903225 0 2020-08-17 23:25:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 50e0fdae-3b9e-4894-85db-f55e00ed6fbe 0x4005a869b0 0x4005a869b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-txmlk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-txmlk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-txmlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-17 23:25:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-17 23:25:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
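The dumps above show why the framework reports these pods as "not available": the image webserver:404 cannot be pulled, so each pod's Ready condition stays False with Reason=ContainersNotReady while the container sits in ContainerCreating. Below is a minimal, self-contained Go sketch of that availability check; isPodReady is an illustrative helper written against k8s.io/api/core/v1, not the e2e framework's own function.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady is an illustrative helper (not the framework's own):
// a pod only counts as available once its Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Mirror the state dumped above: Ready=False, ContainersNotReady.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{{
				Type:   corev1.PodReady,
				Status: corev1.ConditionFalse,
				Reason: "ContainersNotReady",
			}},
		},
	}
	fmt.Println("available:", isPodReady(pod)) // prints: available: false
}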
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:25:26.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-455" for this suite.

• [SLOW TEST:23.729 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":277,"skipped":4549,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 17 23:25:26.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-b406f438-6f74-43d5-a085-fb4ef4ff49ea
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 17 23:25:28.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7450" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":278,"skipped":4563,"failed":0}
SSS
Aug 17 23:25:28.544: INFO: Running AfterSuite actions on all nodes
Aug 17 23:25:28.545: INFO: Running AfterSuite actions on node 1
Aug 17 23:25:28.545: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4566,"failed":0}

Ran 278 of 4844 Specs in 6445.501 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS