I0821 11:56:01.478515 10 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0821 11:56:01.484089 10 e2e.go:124] Starting e2e run "ef46a63a-f611-4a2c-8bf6-b2793b3b0eb3" on Ginkgo node 1 {"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1598010948 - Will randomize all specs Will run 275 of 4992 specs Aug 21 11:56:02.066: INFO: >>> kubeConfig: /root/.kube/config Aug 21 11:56:02.119: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Aug 21 11:56:02.302: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 21 11:56:02.483: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 21 11:56:02.483: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Aug 21 11:56:02.483: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Aug 21 11:56:02.526: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Aug 21 11:56:02.526: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Aug 21 11:56:02.526: INFO: e2e test version: v1.18.8 Aug 21 11:56:02.532: INFO: kube-apiserver version: v1.18.8 Aug 21 11:56:02.536: INFO: >>> kubeConfig: /root/.kube/config Aug 21 11:56:02.558: INFO: Cluster IP family: ipv4 SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:56:02.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected Aug 21 11:56:02.704: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 21 11:56:02.742: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2afb078c-ad6f-4675-9cbd-c5fa77e0db97" in namespace "projected-8497" to be "Succeeded or Failed" Aug 21 11:56:02.856: INFO: Pod "downwardapi-volume-2afb078c-ad6f-4675-9cbd-c5fa77e0db97": Phase="Pending", Reason="", readiness=false. Elapsed: 113.543986ms Aug 21 11:56:04.864: INFO: Pod "downwardapi-volume-2afb078c-ad6f-4675-9cbd-c5fa77e0db97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121880778s Aug 21 11:56:06.871: INFO: Pod "downwardapi-volume-2afb078c-ad6f-4675-9cbd-c5fa77e0db97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.129149095s STEP: Saw pod success Aug 21 11:56:06.871: INFO: Pod "downwardapi-volume-2afb078c-ad6f-4675-9cbd-c5fa77e0db97" satisfied condition "Succeeded or Failed" Aug 21 11:56:06.876: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-2afb078c-ad6f-4675-9cbd-c5fa77e0db97 container client-container: STEP: delete the pod Aug 21 11:56:07.053: INFO: Waiting for pod downwardapi-volume-2afb078c-ad6f-4675-9cbd-c5fa77e0db97 to disappear Aug 21 11:56:07.122: INFO: Pod downwardapi-volume-2afb078c-ad6f-4675-9cbd-c5fa77e0db97 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 11:56:07.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8497" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":6,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:56:07.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 11:56:07.299: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 21 11:56:27.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 create -f -' Aug 21 11:56:33.918: INFO: stderr: "" Aug 21 11:56:33.918: INFO: stdout: "e2e-test-crd-publish-openapi-8861-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 21 11:56:33.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 delete e2e-test-crd-publish-openapi-8861-crds test-cr' Aug 21 11:56:35.184: INFO: stderr: "" Aug 21 11:56:35.184: INFO: stdout: "e2e-test-crd-publish-openapi-8861-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Aug 21 11:56:35.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1079 apply -f -' Aug 21 11:56:36.883: INFO: stderr: "" Aug 21 11:56:36.883: INFO: stdout: "e2e-test-crd-publish-openapi-8861-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 21 11:56:36.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-1079 delete e2e-test-crd-publish-openapi-8861-crds test-cr' Aug 21 11:56:38.140: INFO: stderr: "" Aug 21 11:56:38.141: INFO: stdout: "e2e-test-crd-publish-openapi-8861-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 21 11:56:38.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8861-crds' Aug 21 11:56:39.842: INFO: stderr: "" Aug 21 11:56:39.842: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8861-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 11:56:59.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1079" for this suite. • [SLOW TEST:52.973 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":2,"skipped":24,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:57:00.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Aug 21 11:57:01.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5446' Aug 21 11:57:03.320: INFO: stderr: "" Aug 21 11:57:03.320: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
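The manifest the test pipes into 'kubectl create -f -' above is not reproduced in the log. A rough, hand-written equivalent of that step, and of the annotation patch applied a few lines further down, might look like the following; the image tag and pod name are copied from elsewhere in this run, all remaining field values are illustrative:

cat <<'EOF' | kubectl --namespace=kubectl-5446 create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-master
  labels:
    app: agnhost
spec:
  replicas: 1
  selector:
    app: agnhost
  template:
    metadata:
      labels:
        app: agnhost
    spec:
      containers:
      - name: agnhost-master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
EOF
# annotate each pod owned by the RC, as the test does per discovered pod name below
kubectl --namespace=kubectl-5446 patch pod agnhost-master-cdlt9 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'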
Aug 21 11:57:04.481: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 11:57:04.483: INFO: Found 0 / 1 Aug 21 11:57:05.908: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 11:57:05.908: INFO: Found 0 / 1 Aug 21 11:57:06.432: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 11:57:06.432: INFO: Found 0 / 1 Aug 21 11:57:07.836: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 11:57:07.836: INFO: Found 0 / 1 Aug 21 11:57:08.381: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 11:57:08.381: INFO: Found 0 / 1 Aug 21 11:57:09.330: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 11:57:09.330: INFO: Found 0 / 1 Aug 21 11:57:10.355: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 11:57:10.355: INFO: Found 0 / 1 Aug 21 11:57:11.330: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 11:57:11.331: INFO: Found 1 / 1 Aug 21 11:57:11.331: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 21 11:57:11.339: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 11:57:11.339: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 21 11:57:11.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config patch pod agnhost-master-cdlt9 --namespace=kubectl-5446 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 21 11:57:12.665: INFO: stderr: "" Aug 21 11:57:12.665: INFO: stdout: "pod/agnhost-master-cdlt9 patched\n" STEP: checking annotations Aug 21 11:57:12.779: INFO: Selector matched 1 pods for map[app:agnhost] Aug 21 11:57:12.779: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 11:57:12.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5446" for this suite. • [SLOW TEST:12.777 seconds] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 should add annotations for pods in rc [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":3,"skipped":35,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:57:12.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 11:57:29.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4727" for this suite. • [SLOW TEST:16.336 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":275,"completed":4,"skipped":66,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:57:29.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 11:57:29.463: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 11:57:30.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9503" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":5,"skipped":70,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:57:30.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 21 11:57:30.705: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9414 /api/v1/namespaces/watch-9414/configmaps/e2e-watch-test-watch-closed cf96d15e-aa4a-4be1-ad81-db16be578141 2104355 0 2020-08-21 11:57:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-21 11:57:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 
123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 21 11:57:30.712: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9414 /api/v1/namespaces/watch-9414/configmaps/e2e-watch-test-watch-closed cf96d15e-aa4a-4be1-ad81-db16be578141 2104356 0 2020-08-21 11:57:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-21 11:57:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 21 11:57:30.755: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9414 /api/v1/namespaces/watch-9414/configmaps/e2e-watch-test-watch-closed cf96d15e-aa4a-4be1-ad81-db16be578141 2104358 0 2020-08-21 11:57:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-21 11:57:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 21 11:57:30.758: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9414 /api/v1/namespaces/watch-9414/configmaps/e2e-watch-test-watch-closed cf96d15e-aa4a-4be1-ad81-db16be578141 2104359 0 2020-08-21 11:57:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-21 11:57:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 11:57:30.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9414" for this suite. 
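The pattern this test exercises can be replayed against the raw API. A minimal sketch, not part of this run, that resumes a configmap watch from the last resourceVersion the closed watch delivered, reusing the namespace, object name and version number logged above:

kubectl proxy --port=8001 &
curl -N "http://127.0.0.1:8001/api/v1/namespaces/watch-9414/configmaps?watch=true&resourceVersion=2104356&fieldSelector=metadata.name=e2e-watch-test-watch-closed"
# the resumed stream replays everything that happened while the first watch was closed:
# a MODIFIED event (mutation: 2) followed by DELETED, matching the events logged above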
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":6,"skipped":110,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:57:30.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-ef6d35b2-89cc-4acc-9e47-6d7346c23b77 STEP: Creating a pod to test consume secrets Aug 21 11:57:30.906: INFO: Waiting up to 5m0s for pod "pod-secrets-c22ba333-9db2-4b56-8bf6-9fad0c373bf3" in namespace "secrets-8254" to be "Succeeded or Failed" Aug 21 11:57:30.915: INFO: Pod "pod-secrets-c22ba333-9db2-4b56-8bf6-9fad0c373bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.305079ms Aug 21 11:57:32.923: INFO: Pod "pod-secrets-c22ba333-9db2-4b56-8bf6-9fad0c373bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016621832s Aug 21 11:57:34.964: INFO: Pod "pod-secrets-c22ba333-9db2-4b56-8bf6-9fad0c373bf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057790335s STEP: Saw pod success Aug 21 11:57:34.964: INFO: Pod "pod-secrets-c22ba333-9db2-4b56-8bf6-9fad0c373bf3" satisfied condition "Succeeded or Failed" Aug 21 11:57:34.970: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-c22ba333-9db2-4b56-8bf6-9fad0c373bf3 container secret-env-test: STEP: delete the pod Aug 21 11:57:35.036: INFO: Waiting for pod pod-secrets-c22ba333-9db2-4b56-8bf6-9fad0c373bf3 to disappear Aug 21 11:57:35.178: INFO: Pod pod-secrets-c22ba333-9db2-4b56-8bf6-9fad0c373bf3 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 11:57:35.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8254" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":121,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:57:35.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-ec174285-4ede-445e-aa06-b2eb4eb50408 STEP: Creating a pod to test consume configMaps Aug 21 11:57:35.639: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed4736dc-71ab-42a8-ab29-f6357934a269" in namespace "configmap-4584" to be "Succeeded or Failed" Aug 21 11:57:35.682: INFO: Pod "pod-configmaps-ed4736dc-71ab-42a8-ab29-f6357934a269": Phase="Pending", Reason="", readiness=false. Elapsed: 42.77354ms Aug 21 11:57:37.802: INFO: Pod "pod-configmaps-ed4736dc-71ab-42a8-ab29-f6357934a269": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162348953s Aug 21 11:57:39.809: INFO: Pod "pod-configmaps-ed4736dc-71ab-42a8-ab29-f6357934a269": Phase="Running", Reason="", readiness=true. Elapsed: 4.169789346s Aug 21 11:57:41.869: INFO: Pod "pod-configmaps-ed4736dc-71ab-42a8-ab29-f6357934a269": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.228935672s STEP: Saw pod success Aug 21 11:57:41.869: INFO: Pod "pod-configmaps-ed4736dc-71ab-42a8-ab29-f6357934a269" satisfied condition "Succeeded or Failed" Aug 21 11:57:41.874: INFO: Trying to get logs from node kali-worker pod pod-configmaps-ed4736dc-71ab-42a8-ab29-f6357934a269 container configmap-volume-test: STEP: delete the pod Aug 21 11:57:41.911: INFO: Waiting for pod pod-configmaps-ed4736dc-71ab-42a8-ab29-f6357934a269 to disappear Aug 21 11:57:41.926: INFO: Pod pod-configmaps-ed4736dc-71ab-42a8-ab29-f6357934a269 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 11:57:41.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4584" for this suite. 
• [SLOW TEST:6.746 seconds] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":124,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:57:41.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-789 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 21 11:57:42.026: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 21 11:57:42.133: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 21 11:57:44.143: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 21 11:57:46.155: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 11:57:48.143: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 11:57:50.141: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 11:57:52.139: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 11:57:54.140: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 21 11:57:56.171: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 21 11:57:56.181: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 21 11:57:58.983: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 21 11:58:00.196: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 21 11:58:02.190: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 21 11:58:06.390: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.76:8080/dial?request=hostname&protocol=http&host=10.244.2.244&port=8080&tries=1'] Namespace:pod-network-test-789 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 11:58:06.391: INFO: >>> kubeConfig: /root/.kube/config I0821 11:58:06.558424 10 log.go:172] (0x40055189a0) (0x4002a3b4a0) Create stream I0821 11:58:06.558865 10 log.go:172] (0x40055189a0) (0x4002a3b4a0) Stream added, broadcasting: 
1 I0821 11:58:06.575233 10 log.go:172] (0x40055189a0) Reply frame received for 1 I0821 11:58:06.575856 10 log.go:172] (0x40055189a0) (0x40028e0460) Create stream I0821 11:58:06.575926 10 log.go:172] (0x40055189a0) (0x40028e0460) Stream added, broadcasting: 3 I0821 11:58:06.577633 10 log.go:172] (0x40055189a0) Reply frame received for 3 I0821 11:58:06.578040 10 log.go:172] (0x40055189a0) (0x400247e0a0) Create stream I0821 11:58:06.578144 10 log.go:172] (0x40055189a0) (0x400247e0a0) Stream added, broadcasting: 5 I0821 11:58:06.579413 10 log.go:172] (0x40055189a0) Reply frame received for 5 I0821 11:58:06.643653 10 log.go:172] (0x40055189a0) Data frame received for 3 I0821 11:58:06.644008 10 log.go:172] (0x40055189a0) Data frame received for 5 I0821 11:58:06.644137 10 log.go:172] (0x400247e0a0) (5) Data frame handling I0821 11:58:06.644234 10 log.go:172] (0x40028e0460) (3) Data frame handling I0821 11:58:06.644718 10 log.go:172] (0x40055189a0) Data frame received for 1 I0821 11:58:06.644872 10 log.go:172] (0x4002a3b4a0) (1) Data frame handling I0821 11:58:06.646655 10 log.go:172] (0x4002a3b4a0) (1) Data frame sent I0821 11:58:06.646952 10 log.go:172] (0x40028e0460) (3) Data frame sent I0821 11:58:06.647021 10 log.go:172] (0x40055189a0) Data frame received for 3 I0821 11:58:06.647066 10 log.go:172] (0x40028e0460) (3) Data frame handling I0821 11:58:06.647611 10 log.go:172] (0x40055189a0) (0x4002a3b4a0) Stream removed, broadcasting: 1 I0821 11:58:06.650300 10 log.go:172] (0x40055189a0) Go away received I0821 11:58:06.652481 10 log.go:172] (0x40055189a0) (0x4002a3b4a0) Stream removed, broadcasting: 1 I0821 11:58:06.652959 10 log.go:172] (0x40055189a0) (0x40028e0460) Stream removed, broadcasting: 3 I0821 11:58:06.653231 10 log.go:172] (0x40055189a0) (0x400247e0a0) Stream removed, broadcasting: 5 Aug 21 11:58:06.654: INFO: Waiting for responses: map[] Aug 21 11:58:06.659: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.76:8080/dial?request=hostname&protocol=http&host=10.244.1.74&port=8080&tries=1'] Namespace:pod-network-test-789 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 11:58:06.659: INFO: >>> kubeConfig: /root/.kube/config I0821 11:58:06.716145 10 log.go:172] (0x4004a0f6b0) (0x400297d4a0) Create stream I0821 11:58:06.716265 10 log.go:172] (0x4004a0f6b0) (0x400297d4a0) Stream added, broadcasting: 1 I0821 11:58:06.719251 10 log.go:172] (0x4004a0f6b0) Reply frame received for 1 I0821 11:58:06.719478 10 log.go:172] (0x4004a0f6b0) (0x4002a3b540) Create stream I0821 11:58:06.719605 10 log.go:172] (0x4004a0f6b0) (0x4002a3b540) Stream added, broadcasting: 3 I0821 11:58:06.721266 10 log.go:172] (0x4004a0f6b0) Reply frame received for 3 I0821 11:58:06.721372 10 log.go:172] (0x4004a0f6b0) (0x400297d540) Create stream I0821 11:58:06.721451 10 log.go:172] (0x4004a0f6b0) (0x400297d540) Stream added, broadcasting: 5 I0821 11:58:06.722640 10 log.go:172] (0x4004a0f6b0) Reply frame received for 5 I0821 11:58:06.787036 10 log.go:172] (0x4004a0f6b0) Data frame received for 3 I0821 11:58:06.787218 10 log.go:172] (0x4002a3b540) (3) Data frame handling I0821 11:58:06.787339 10 log.go:172] (0x4002a3b540) (3) Data frame sent I0821 11:58:06.787435 10 log.go:172] (0x4004a0f6b0) Data frame received for 3 I0821 11:58:06.787531 10 log.go:172] (0x4004a0f6b0) Data frame received for 5 I0821 11:58:06.787637 10 log.go:172] (0x400297d540) (5) Data frame handling I0821 11:58:06.787785 10 log.go:172] (0x4002a3b540) (3) 
Data frame handling I0821 11:58:06.789483 10 log.go:172] (0x4004a0f6b0) Data frame received for 1 I0821 11:58:06.789562 10 log.go:172] (0x400297d4a0) (1) Data frame handling I0821 11:58:06.789633 10 log.go:172] (0x400297d4a0) (1) Data frame sent I0821 11:58:06.789703 10 log.go:172] (0x4004a0f6b0) (0x400297d4a0) Stream removed, broadcasting: 1 I0821 11:58:06.789807 10 log.go:172] (0x4004a0f6b0) Go away received I0821 11:58:06.790408 10 log.go:172] (0x4004a0f6b0) (0x400297d4a0) Stream removed, broadcasting: 1 I0821 11:58:06.790536 10 log.go:172] (0x4004a0f6b0) (0x4002a3b540) Stream removed, broadcasting: 3 I0821 11:58:06.790634 10 log.go:172] (0x4004a0f6b0) (0x400297d540) Stream removed, broadcasting: 5 Aug 21 11:58:06.790: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 11:58:06.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-789" for this suite. • [SLOW TEST:24.862 seconds] [sig-network] Networking /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":131,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:58:06.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 11:58:06.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config version' Aug 21 11:58:08.283: INFO: stderr: "" Aug 21 11:58:08.284: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.8\", GitCommit:\"9f2892aab98fe339f3bd70e3c470144299398ace\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T16:12:48Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", 
GitVersion:\"v1.18.8\", GitCommit:\"9f2892aab98fe339f3bd70e3c470144299398ace\", GitTreeState:\"clean\", BuildDate:\"2020-08-14T21:13:38Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 11:58:08.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7707" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":10,"skipped":134,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 11:58:08.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-7d018d9c-b579-4280-809b-bf217fa673f5 in namespace container-probe-749 Aug 21 11:58:13.046: INFO: Started pod liveness-7d018d9c-b579-4280-809b-bf217fa673f5 in namespace container-probe-749 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 11:58:13.290: INFO: Initial restart count of pod liveness-7d018d9c-b579-4280-809b-bf217fa673f5 is 0 Aug 21 11:58:25.366: INFO: Restart count of pod container-probe-749/liveness-7d018d9c-b579-4280-809b-bf217fa673f5 is now 1 (12.075418866s elapsed) Aug 21 11:58:47.621: INFO: Restart count of pod container-probe-749/liveness-7d018d9c-b579-4280-809b-bf217fa673f5 is now 2 (34.330088602s elapsed) Aug 21 11:59:08.139: INFO: Restart count of pod container-probe-749/liveness-7d018d9c-b579-4280-809b-bf217fa673f5 is now 3 (54.848293597s elapsed) Aug 21 11:59:26.194: INFO: Restart count of pod container-probe-749/liveness-7d018d9c-b579-4280-809b-bf217fa673f5 is now 4 (1m12.903036987s elapsed) Aug 21 12:00:28.858: INFO: Restart count of pod container-probe-749/liveness-7d018d9c-b579-4280-809b-bf217fa673f5 is now 5 (2m15.5673982s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 12:00:30.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-749" for this suite. 
• [SLOW TEST:141.901 seconds] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 12:00:30.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 21 12:00:33.169: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 21 12:00:35.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608033, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608033, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608033, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608033, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 12:00:37.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608033, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608033, loc:(*time.Location)(0x74b2e20)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608033, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608033, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 12:00:40.435: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 12:00:40.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 12:00:41.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9283" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:11.688 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":12,"skipped":198,"failed":0} [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 12:00:41.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test 
downward API volume plugin Aug 21 12:00:42.034: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12213382-809e-4ecd-b4c2-4e9aa28c748e" in namespace "downward-api-6789" to be "Succeeded or Failed" Aug 21 12:00:42.043: INFO: Pod "downwardapi-volume-12213382-809e-4ecd-b4c2-4e9aa28c748e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.243393ms Aug 21 12:00:44.050: INFO: Pod "downwardapi-volume-12213382-809e-4ecd-b4c2-4e9aa28c748e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016407375s Aug 21 12:00:46.057: INFO: Pod "downwardapi-volume-12213382-809e-4ecd-b4c2-4e9aa28c748e": Phase="Running", Reason="", readiness=true. Elapsed: 4.023327878s Aug 21 12:00:48.066: INFO: Pod "downwardapi-volume-12213382-809e-4ecd-b4c2-4e9aa28c748e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031796549s STEP: Saw pod success Aug 21 12:00:48.066: INFO: Pod "downwardapi-volume-12213382-809e-4ecd-b4c2-4e9aa28c748e" satisfied condition "Succeeded or Failed" Aug 21 12:00:48.071: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-12213382-809e-4ecd-b4c2-4e9aa28c748e container client-container: STEP: delete the pod Aug 21 12:00:48.255: INFO: Waiting for pod downwardapi-volume-12213382-809e-4ecd-b4c2-4e9aa28c748e to disappear Aug 21 12:00:48.303: INFO: Pod downwardapi-volume-12213382-809e-4ecd-b4c2-4e9aa28c748e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 12:00:48.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6789" for this suite. • [SLOW TEST:6.453 seconds] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":198,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 12:00:48.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 21 12:00:54.392: 
INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 21 12:00:57.447: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 12:00:59.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 12:01:01.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608054, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 21 12:01:04.540: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: 
Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 12:01:05.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5954" for this suite. STEP: Destroying namespace "webhook-5954-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.987 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":14,"skipped":202,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 12:01:05.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 12:01:22.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8471" for this suite. • [SLOW TEST:17.532 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":15,"skipped":209,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 12:01:22.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 12:01:27.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8113" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":16,"skipped":211,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 12:01:27.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 12:01:28.471: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 21 12:01:33.710: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 21 12:01:33.711: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 21 12:01:33.989: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6046 
/apis/apps/v1/namespaces/deployment-6046/deployments/test-cleanup-deployment cfccea2e-d41f-417f-9f2e-0bf186c0ac16 2105956 1 2020-08-21 12:01:33 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-08-21 12:01:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40036293c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Aug 21 12:01:34.206: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-6046 /apis/apps/v1/namespaces/deployment-6046/replicasets/test-cleanup-deployment-b4867b47f 0b63fd14-2b99-45a9-b8cf-d72e4eafa32d 2105960 1 2020-08-21 12:01:33 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment cfccea2e-d41f-417f-9f2e-0bf186c0ac16 0x40034acb40 0x40034acb41}] [] [{kube-controller-manager Update apps/v1 2020-08-21 12:01:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 102 99 99 101 97 50 101 45 100 52 49 102 45 52 49 55 102 45 57 102 50 101 45 48 98 102 49 56 54 99 48 97 99 49 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 
104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40034acbb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 21 12:01:34.206: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Aug 21 12:01:34.207: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6046 /apis/apps/v1/namespaces/deployment-6046/replicasets/test-cleanup-controller 1bf6f703-eeab-4bef-bf8e-80347d37c637 2105959 1 2020-08-21 12:01:28 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment cfccea2e-d41f-417f-9f2e-0bf186c0ac16 0x40034aca1f 0x40034aca30}] [] [{e2e.test Update apps/v1 2020-08-21 12:01:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 
116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 12:01:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 102 99 99 101 97 50 101 45 100 52 49 102 45 52 49 55 102 45 57 102 50 101 45 48 98 102 49 56 54 99 48 97 99 49 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x40034acac8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 21 12:01:34.331: INFO: Pod "test-cleanup-controller-hx2qb" 
is available: &Pod{ObjectMeta:{test-cleanup-controller-hx2qb test-cleanup-controller- deployment-6046 /api/v1/namespaces/deployment-6046/pods/test-cleanup-controller-hx2qb 26ab61b9-ba1b-43c3-adc3-e13da475ad27 2105945 0 2020-08-21 12:01:28 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 1bf6f703-eeab-4bef-bf8e-80347d37c637 0x40036298a7 0x40036298a8}] [] [{kube-controller-manager Update v1 2020-08-21 12:01:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 98 102 54 102 55 48 51 45 101 101 97 98 45 52 98 101 102 45 98 102 56 101 45 56 48 51 52 55 100 51 55 99 54 51 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 12:01:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 
102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 53 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4dpmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4dpmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4dpmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[
]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:01:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:01:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:01:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.252,StartTime:2020-08-21 12:01:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 12:01:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8045e907c77b6c91806bc6e458aafdd0000a4220d22b9becb570197d5614d22e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.252,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 21 12:01:34.333: INFO: Pod "test-cleanup-deployment-b4867b47f-pccpr" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-pccpr test-cleanup-deployment-b4867b47f- deployment-6046 /api/v1/namespaces/deployment-6046/pods/test-cleanup-deployment-b4867b47f-pccpr 338bb612-cf51-4697-9484-25b80f91d49c 2105965 0 2020-08-21 12:01:33 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 0b63fd14-2b99-45a9-b8cf-d72e4eafa32d 0x4003629a60 0x4003629a61}] [] [{kube-controller-manager Update v1 2020-08-21 12:01:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 98 54 51 102 100 49 52 45 50 98 57 57 45 52 53 97 57 45 98 56 99 102 45 100 55 50 101 52 101 97 102 97 51 50 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4dpmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4dpmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4dpmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodSta
tus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:01:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 12:01:34.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6046" for this suite. • [SLOW TEST:6.562 seconds] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":17,"skipped":216,"failed":0} [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 12:01:34.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 12:01:34.571: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 21 12:01:34.745: INFO: Number of nodes with available pods: 0 Aug 21 12:01:34.745: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
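The step above relies on the DaemonSet's pod template carrying a nodeSelector, so daemon pods are scheduled only onto nodes whose labels match; the polling lines that follow wait for exactly that. A minimal client-go sketch of the same technique is below. It is a sketch under assumptions: the label key/value (color=blue), the object names, the image, and the namespace are illustrative, not the values generated by this e2e run.

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the path the suite itself uses; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes labeled color=blue are eligible; the label key and
					// value here are assumptions standing in for the test's own label.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}

	// "daemonsets-1688" mirrors the ephemeral namespace in this run; any namespace works.
	created, err := client.AppsV1().DaemonSets("daemonsets-1688").Create(context.TODO(), ds, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created DaemonSet", created.Name)
}

Relabeling the node (for example to color=green) without changing the selector is what later causes the daemon pod to be unscheduled, which is the second phase the log records below.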
Aug 21 12:01:34.978: INFO: Number of nodes with available pods: 0 Aug 21 12:01:34.978: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:36.110: INFO: Number of nodes with available pods: 0 Aug 21 12:01:36.110: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:36.985: INFO: Number of nodes with available pods: 0 Aug 21 12:01:36.985: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:37.987: INFO: Number of nodes with available pods: 0 Aug 21 12:01:37.988: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:38.985: INFO: Number of nodes with available pods: 0 Aug 21 12:01:38.985: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:40.352: INFO: Number of nodes with available pods: 1 Aug 21 12:01:40.352: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 21 12:01:40.507: INFO: Number of nodes with available pods: 1 Aug 21 12:01:40.507: INFO: Number of running nodes: 0, number of available pods: 1 Aug 21 12:01:41.516: INFO: Number of nodes with available pods: 0 Aug 21 12:01:41.516: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 21 12:01:41.559: INFO: Number of nodes with available pods: 0 Aug 21 12:01:41.559: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:42.567: INFO: Number of nodes with available pods: 0 Aug 21 12:01:42.567: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:43.567: INFO: Number of nodes with available pods: 0 Aug 21 12:01:43.567: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:44.567: INFO: Number of nodes with available pods: 0 Aug 21 12:01:44.567: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:45.566: INFO: Number of nodes with available pods: 0 Aug 21 12:01:45.566: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:46.566: INFO: Number of nodes with available pods: 0 Aug 21 12:01:46.566: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:47.567: INFO: Number of nodes with available pods: 0 Aug 21 12:01:47.567: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:48.566: INFO: Number of nodes with available pods: 0 Aug 21 12:01:48.566: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:49.568: INFO: Number of nodes with available pods: 0 Aug 21 12:01:49.568: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:50.566: INFO: Number of nodes with available pods: 0 Aug 21 12:01:50.566: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:51.567: INFO: Number of nodes with available pods: 0 Aug 21 12:01:51.567: INFO: Node kali-worker is running more than one daemon pod Aug 21 12:01:52.567: INFO: Number of nodes with available pods: 1 Aug 21 12:01:52.567: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1688, will wait for the garbage collector to delete the pods Aug 21 12:01:52.652: INFO: Deleting DaemonSet.extensions daemon-set took: 12.517112ms Aug 21 
12:01:52.755: INFO: Terminating DaemonSet.extensions daemon-set pods took: 102.931041ms Aug 21 12:01:59.481: INFO: Number of nodes with available pods: 0 Aug 21 12:01:59.481: INFO: Number of running nodes: 0, number of available pods: 0 Aug 21 12:01:59.509: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1688/daemonsets","resourceVersion":"2106125"},"items":null} Aug 21 12:01:59.515: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1688/pods","resourceVersion":"2106127"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 12:01:59.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1688" for this suite. • [SLOW TEST:25.200 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":18,"skipped":216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 12:01:59.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Aug 21 12:01:59.659: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 12:03:47.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3804" for this suite. 
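For context on the test that just finished: it publishes a CRD with two versions, marks one version as not served, and then checks that the unserved version's definition disappears from the published OpenAPI spec while the other version is unchanged. A minimal sketch of such a two-version CRD, using the apiextensions/v1 Go types, follows; the group, kind, and schema are illustrative assumptions (the run itself uses generated names such as e2e-test-crd-publish-openapi-...).

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(cfg)

	objSchema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}

	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Scope: apiextv1.NamespaceScoped,
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				// v1 stays served, so its definition remains in the aggregated OpenAPI document.
				{Name: "v1", Served: true, Storage: true, Schema: objSchema},
				// Served: false is the field the "mark a version not served" step toggles;
				// the apiserver then drops this version from the published spec.
				{Name: "v2", Served: false, Storage: false, Schema: objSchema},
			},
		},
	}

	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

In the actual test both versions start out served and one is then updated to Served: false; the sketch simply shows the field that the update flips.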
• [SLOW TEST:107.527 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":19,"skipped":253,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 12:03:47.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 12:03:47.253: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-82f0f50c-c648-40fd-a831-50f5aeb24576" in namespace "security-context-test-7468" to be "Succeeded or Failed" Aug 21 12:03:47.268: INFO: Pod "alpine-nnp-false-82f0f50c-c648-40fd-a831-50f5aeb24576": Phase="Pending", Reason="", readiness=false. Elapsed: 15.208901ms Aug 21 12:03:49.275: INFO: Pod "alpine-nnp-false-82f0f50c-c648-40fd-a831-50f5aeb24576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022314258s Aug 21 12:03:51.283: INFO: Pod "alpine-nnp-false-82f0f50c-c648-40fd-a831-50f5aeb24576": Phase="Running", Reason="", readiness=true. Elapsed: 4.030058136s Aug 21 12:03:53.289: INFO: Pod "alpine-nnp-false-82f0f50c-c648-40fd-a831-50f5aeb24576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036820418s Aug 21 12:03:53.290: INFO: Pod "alpine-nnp-false-82f0f50c-c648-40fd-a831-50f5aeb24576" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 21 12:03:53.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7468" for this suite. 
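For context on the security-context test above: the pod succeeds only if its container runs with the kernel's no_new_privs bit set, which the kubelet applies when allowPrivilegeEscalation is false. A minimal client-go sketch of a pod using that setting is below; the image, the grep-based check, and the namespace are illustrative assumptions rather than the exact nonewprivs helper the e2e image uses.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	noEscalation := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "check",
				Image: "alpine:3.12",
				// With allowPrivilegeEscalation=false the kubelet sets no_new_privs,
				// so this check should show a NoNewPrivs value of 1 in /proc/self/status.
				Command: []string{"sh", "-c", "grep NoNewPrivs /proc/self/status"},
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &noEscalation,
				},
			}},
		},
	}

	// "default" is used here for illustration; the e2e run creates its own ephemeral namespace.
	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}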
• [SLOW TEST:6.212 seconds] [k8s.io] Security Context /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":280,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 21 12:03:53.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 21 12:03:53.849: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 21 12:03:58.858: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 21 12:03:58.858: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 21 12:04:00.866: INFO: Creating deployment "test-rollover-deployment" Aug 21 12:04:00.891: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 21 12:04:02.909: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 21 12:04:02.941: INFO: Ensure that both replica sets have 1 created replica Aug 21 12:04:02.967: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 21 12:04:02.999: INFO: Updating deployment test-rollover-deployment Aug 21 12:04:02.999: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 21 12:04:05.310: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 21 12:04:05.321: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 21 12:04:05.611: INFO: all replica sets need to contain the pod-template-hash label Aug 21 12:04:05.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, 
loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608243, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 12:04:07.627: INFO: all replica sets need to contain the pod-template-hash label Aug 21 12:04:07.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608243, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 12:04:10.050: INFO: all replica sets need to contain the pod-template-hash label Aug 21 12:04:10.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608247, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 12:04:11.625: INFO: all replica sets need to contain the pod-template-hash label Aug 21 12:04:11.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608247, loc:(*time.Location)(0x74b2e20)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 12:04:13.625: INFO: all replica sets need to contain the pod-template-hash label Aug 21 12:04:13.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608247, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 12:04:15.661: INFO: all replica sets need to contain the pod-template-hash label Aug 21 12:04:15.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608247, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 12:04:17.638: INFO: all replica sets need to contain the pod-template-hash label Aug 21 12:04:17.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608247, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608240, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 12:04:19.630: INFO: Aug 21 12:04:19.630: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 21 12:04:19.841: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9820 /apis/apps/v1/namespaces/deployment-9820/deployments/test-rollover-deployment a960dc05-2185-4027-a2f0-01fa0ee810fb 2106812 2 2020-08-21 12:04:00 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-21 12:04:02 +0000 UTC FieldsV1 [raw managedFields byte dump omitted]} {kube-controller-manager Update apps/v1 2020-08-21 12:04:18 +0000 UTC FieldsV1 [raw managedFields byte dump omitted]}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4004899928 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-21 12:04:00 +0000 UTC,LastTransitionTime:2020-08-21 12:04:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-08-21 12:04:18 +0000 UTC,LastTransitionTime:2020-08-21 12:04:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Aug 21 12:04:19.849: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b deployment-9820 /apis/apps/v1/namespaces/deployment-9820/replicasets/test-rollover-deployment-84f7f6f64b 8868247a-8513-47a7-8758-baf095cca7aa 2106801 2 2020-08-21 12:04:03 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a960dc05-2185-4027-a2f0-01fa0ee810fb 0x400524da57 0x400524da58}] [] [{kube-controller-manager Update apps/v1 2020-08-21 12:04:17 +0000 UTC FieldsV1 [raw managedFields byte dump omitted]}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400524dae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 21 12:04:19.849: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 21 12:04:19.850: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9820 /apis/apps/v1/namespaces/deployment-9820/replicasets/test-rollover-controller 49b9edbf-c9a4-495d-935e-93aa366739a2 2106811 2 2020-08-21 12:03:53 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a960dc05-2185-4027-a2f0-01fa0ee810fb 0x400524d83f 0x400524d850}] [] [{e2e.test Update apps/v1 2020-08-21 12:03:53 +0000 UTC FieldsV1 [raw managedFields byte dump omitted]} {kube-controller-manager Update apps/v1 2020-08-21 12:04:18 +0000 UTC FieldsV1 [raw managedFields byte dump omitted]}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x400524d8e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 12:04:19.852: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-9820 /apis/apps/v1/namespaces/deployment-9820/replicasets/test-rollover-deployment-5686c4cfd5 aca742e8-aa2e-4e16-b559-8902122a2851 2106747 2 2020-08-21 12:04:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a960dc05-2185-4027-a2f0-01fa0ee810fb 0x400524d957 0x400524d958}] [] [{kube-controller-manager Update apps/v1 2020-08-21 12:04:03 +0000 UTC FieldsV1 [raw managedFields byte dump omitted]}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400524d9e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 12:04:19.860: INFO: Pod "test-rollover-deployment-84f7f6f64b-mj5dv" is available: &Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-mj5dv test-rollover-deployment-84f7f6f64b- deployment-9820 /api/v1/namespaces/deployment-9820/pods/test-rollover-deployment-84f7f6f64b-mj5dv 4e82ac89-f2c4-4fc8-97b1-6a604fdeba6f 2106769 0 2020-08-21 12:04:03 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 8868247a-8513-47a7-8758-baf095cca7aa 0x4004869827 0x4004869828}] [] [{kube-controller-manager Update v1 2020-08-21 12:04:03 +0000 UTC FieldsV1 [raw managedFields byte dump omitted]} {kubelet Update v1 2020-08-21 12:04:07 +0000 UTC FieldsV1 [raw managedFields byte dump omitted]}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5mttx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5mttx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5mttx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:04:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:04:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.3,StartTime:2020-08-21 12:04:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 12:04:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://22f06aed3876533aea45a16e2ebf69d633c9ea32a3ecefa15f8a58121fe7376c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:04:19.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9820" for this suite.

• [SLOW TEST:26.552 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":21,"skipped":296,"failed":0}
SSSSSSSSS
------------------------------
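As an aside (not part of the captured log), here is a minimal Go sketch of a Deployment shaped like the one dumped above; the label, image, MinReadySeconds and rolling-update bounds are taken from the dump, while everything else (object construction, YAML printing) is purely illustrative:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "sigs.k8s.io/yaml"
)

func main() {
    replicas := int32(1)
    maxUnavailable := intstr.FromInt(0) // never drop below the desired count during rollover
    maxSurge := intstr.FromInt(1)       // allow one extra pod while the new ReplicaSet comes up
    labels := map[string]string{"name": "rollover-pod"}

    d := &appsv1.Deployment{
        TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
        ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas:        &replicas,
            MinReadySeconds: 10, // matches MinReadySeconds:10 in the dump above
            Selector:        &metav1.LabelSelector{MatchLabels: labels},
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
                RollingUpdate: &appsv1.RollingUpdateDeployment{
                    MaxUnavailable: &maxUnavailable,
                    MaxSurge:       &maxSurge,
                },
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "agnhost",
                        Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
                    }},
                },
            },
        },
    }

    out, err := yaml.Marshal(d)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}

With maxUnavailable=0 and maxSurge=1 the controller may create one extra pod, but it only scales the old ReplicaSet down after the replacement has been Ready for minReadySeconds, which is the rollover behaviour this test asserts.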
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:04:19.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:04:20.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88d5b999-20e3-41d6-a1f9-0a75d2cd0367" in namespace "projected-4727" to be "Succeeded or Failed"
Aug 21 12:04:20.414: INFO: Pod "downwardapi-volume-88d5b999-20e3-41d6-a1f9-0a75d2cd0367": Phase="Pending", Reason="", readiness=false. Elapsed: 20.753696ms
Aug 21 12:04:22.485: INFO: Pod "downwardapi-volume-88d5b999-20e3-41d6-a1f9-0a75d2cd0367": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092123669s
Aug 21 12:04:24.580: INFO: Pod "downwardapi-volume-88d5b999-20e3-41d6-a1f9-0a75d2cd0367": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18710225s
Aug 21 12:04:26.586: INFO: Pod "downwardapi-volume-88d5b999-20e3-41d6-a1f9-0a75d2cd0367": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.192894251s
STEP: Saw pod success
Aug 21 12:04:26.586: INFO: Pod "downwardapi-volume-88d5b999-20e3-41d6-a1f9-0a75d2cd0367" satisfied condition "Succeeded or Failed"
Aug 21 12:04:26.592: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-88d5b999-20e3-41d6-a1f9-0a75d2cd0367 container client-container: 
STEP: delete the pod
Aug 21 12:04:26.722: INFO: Waiting for pod downwardapi-volume-88d5b999-20e3-41d6-a1f9-0a75d2cd0367 to disappear
Aug 21 12:04:26.729: INFO: Pod downwardapi-volume-88d5b999-20e3-41d6-a1f9-0a75d2cd0367 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:04:26.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4727" for this suite.

• [SLOW TEST:6.866 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":305,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
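For context (not from the log): the downward API volume plugin used above projects pod metadata into files. A hedged sketch of such a pod follows; the volume name, mount path and busybox image are assumptions, only the fieldRef to metadata.name reflects what "should provide podname only" exercises:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "podname", // this file will contain the pod's own name
                                    FieldRef: &corev1.ObjectFieldSelector{
                                        APIVersion: "v1",
                                        FieldPath:  "metadata.name",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox", // illustrative image
                Command:      []string{"cat", "/etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }

    out, _ := yaml.Marshal(pod)
    fmt.Println(string(out))
}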
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:04:26.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 21 12:04:31.958: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:04:32.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2647" for this suite.

• [SLOW TEST:5.376 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":23,"skipped":351,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
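Not part of the log: a rough sketch of the two objects behind the adoption/release steps above, under the assumption that an unowned pod is created first and a ReplicaSet with a matching selector second; the container name and image are illustrative. The ReplicaSet controller adopts the matching pod by adding a controller ownerReference, and releases it again once its labels stop matching the selector.

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    labels := map[string]string{"name": "pod-adoption-release"}
    replicas := int32(1)

    // A pod created first, with no owner; its labels match the ReplicaSet selector below.
    orphan := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release", Labels: labels},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "app", Image: "docker.io/library/httpd:2.4.38-alpine"}},
        },
    }

    // A ReplicaSet whose selector matches the orphan; the controller adopts it
    // instead of creating a new pod.
    rs := &appsv1.ReplicaSet{
        TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "ReplicaSet"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
        Spec: appsv1.ReplicaSetSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "app", Image: "docker.io/library/httpd:2.4.38-alpine"}},
                },
            },
        },
    }

    for _, obj := range []interface{}{orphan, rs} {
        out, _ := yaml.Marshal(obj)
        fmt.Println(string(out))
        fmt.Println("---")
    }
}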
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:04:32.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:04:56.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4646" for this suite.

• [SLOW TEST:24.223 seconds]
[sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":24,"skipped":396,"failed":0}
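As a hedged illustration (not the test's actual manifest): a Job whose container exits non-zero part of the time, with restartPolicy OnFailure so the kubelet restarts the container in place rather than the Job controller replacing the pod; the command, image and parallelism/completions counts below are assumptions.

package main

import (
    "fmt"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    parallelism := int32(2)
    completions := int32(4)

    job := &batchv1.Job{
        TypeMeta:   metav1.TypeMeta{APIVersion: "batch/v1", Kind: "Job"},
        ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fail"},
        Spec: batchv1.JobSpec{
            Parallelism: &parallelism,
            Completions: &completions,
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    // OnFailure lets the kubelet restart the failed container locally
                    // instead of the Job controller creating a replacement pod.
                    RestartPolicy: corev1.RestartPolicyOnFailure,
                    Containers: []corev1.Container{{
                        Name:    "c",
                        Image:   "busybox",
                        Command: []string{"sh", "-c", "exit $(( $$ % 2 ))"}, // fails whenever its PID is odd (illustrative)
                    }},
                },
            },
        },
    }

    out, _ := yaml.Marshal(job)
    fmt.Println(string(out))
}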
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:04:56.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:04:56.761: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/:
alternatives.log
containers/
[the same two-entry directory listing repeats for each subsequent proxy attempt; the rest of this test's records and the header of the next test, "[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps", are missing from the captured log, which resumes partway through that test's setup:]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 21 12:04:58.786: INFO: Pod name wrapped-volume-race-13ff134c-9a5e-486e-80de-f311d6deb49a: Found 0 pods out of 5
Aug 21 12:05:03.822: INFO: Pod name wrapped-volume-race-13ff134c-9a5e-486e-80de-f311d6deb49a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-13ff134c-9a5e-486e-80de-f311d6deb49a in namespace emptydir-wrapper-7645, will wait for the garbage collector to delete the pods
Aug 21 12:05:20.111: INFO: Deleting ReplicationController wrapped-volume-race-13ff134c-9a5e-486e-80de-f311d6deb49a took: 41.566799ms
Aug 21 12:05:20.512: INFO: Terminating ReplicationController wrapped-volume-race-13ff134c-9a5e-486e-80de-f311d6deb49a pods took: 400.779266ms
STEP: Creating RC which spawns configmap-volume pods
Aug 21 12:05:40.072: INFO: Pod name wrapped-volume-race-5536f406-da72-48f4-be45-26e4b383d00f: Found 0 pods out of 5
Aug 21 12:05:45.094: INFO: Pod name wrapped-volume-race-5536f406-da72-48f4-be45-26e4b383d00f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5536f406-da72-48f4-be45-26e4b383d00f in namespace emptydir-wrapper-7645, will wait for the garbage collector to delete the pods
Aug 21 12:06:04.022: INFO: Deleting ReplicationController wrapped-volume-race-5536f406-da72-48f4-be45-26e4b383d00f took: 10.534737ms
Aug 21 12:06:04.723: INFO: Terminating ReplicationController wrapped-volume-race-5536f406-da72-48f4-be45-26e4b383d00f pods took: 700.903779ms
STEP: Creating RC which spawns configmap-volume pods
Aug 21 12:06:21.684: INFO: Pod name wrapped-volume-race-6abdad17-5496-43b0-85de-481365d95224: Found 0 pods out of 5
Aug 21 12:06:26.704: INFO: Pod name wrapped-volume-race-6abdad17-5496-43b0-85de-481365d95224: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6abdad17-5496-43b0-85de-481365d95224 in namespace emptydir-wrapper-7645, will wait for the garbage collector to delete the pods
Aug 21 12:06:43.182: INFO: Deleting ReplicationController wrapped-volume-race-6abdad17-5496-43b0-85de-481365d95224 took: 9.127146ms
Aug 21 12:06:43.583: INFO: Terminating ReplicationController wrapped-volume-race-6abdad17-5496-43b0-85de-481365d95224 pods took: 400.944929ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:07:00.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7645" for this suite.

• [SLOW TEST:123.342 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":26,"skipped":399,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
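For reference (not from the log): the race scenario above mounts every one of the 50 ConfigMaps into each pod as its own volume, which is what used to collide inside the emptyDir wrapper. A sketch of that pod shape, with illustrative names:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race-example"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"sleep", "3600"},
            }},
        },
    }

    // Mount 50 ConfigMaps, each as its own volume, mirroring the "Creating 50 configmaps" step above.
    for i := 0; i < 50; i++ {
        name := fmt.Sprintf("racey-configmap-%d", i)
        pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: name},
                },
            },
        })
        pod.Spec.Containers[0].VolumeMounts = append(pod.Spec.Containers[0].VolumeMounts,
            corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
    }

    out, _ := yaml.Marshal(pod)
    fmt.Println(string(out))
}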
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:07:00.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 21 12:07:07.233: INFO: Successfully updated pod "adopt-release-89xcn"
STEP: Checking that the Job readopts the Pod
Aug 21 12:07:07.233: INFO: Waiting up to 15m0s for pod "adopt-release-89xcn" in namespace "job-2496" to be "adopted"
Aug 21 12:07:07.434: INFO: Pod "adopt-release-89xcn": Phase="Running", Reason="", readiness=true. Elapsed: 200.447654ms
Aug 21 12:07:09.443: INFO: Pod "adopt-release-89xcn": Phase="Running", Reason="", readiness=true. Elapsed: 2.209858371s
Aug 21 12:07:09.444: INFO: Pod "adopt-release-89xcn" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 21 12:07:09.959: INFO: Successfully updated pod "adopt-release-89xcn"
STEP: Checking that the Job releases the Pod
Aug 21 12:07:09.959: INFO: Waiting up to 15m0s for pod "adopt-release-89xcn" in namespace "job-2496" to be "released"
Aug 21 12:07:09.977: INFO: Pod "adopt-release-89xcn": Phase="Running", Reason="", readiness=true. Elapsed: 17.530727ms
Aug 21 12:07:11.992: INFO: Pod "adopt-release-89xcn": Phase="Running", Reason="", readiness=true. Elapsed: 2.032156838s
Aug 21 12:07:11.992: INFO: Pod "adopt-release-89xcn" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:07:11.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2496" for this suite.

• [SLOW TEST:11.853 seconds]
[sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":27,"skipped":421,"failed":0}
SSS
------------------------------
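Not from the log: adoption and release for a Job hinge on whether the pod's labels still match the Job's selector. A client-go sketch of the kind of label edit the "Removing the labels from the Job's Pod" step performs; the label key "job" is a placeholder for whichever key the selector actually matches, and the kubeconfig path is an assumption.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Clearing the matching label means the pod no longer satisfies the Job's selector,
    // so the Job controller drops its controller ownerReference and releases the pod.
    patch := []byte(`{"metadata":{"labels":{"job":null}}}`)
    pod, err := client.CoreV1().Pods("job-2496").Patch(context.TODO(),
        "adopt-release-89xcn", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("patched pod:", pod.Name, "labels:", pod.Labels)
}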
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:07:12.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:07:12.456: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:07:19.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6240" for this suite.

• [SLOW TEST:7.087 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":28,"skipped":424,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:07:19.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:07:19.305: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38f2d426-737e-4d2d-acaf-a9b57e82dfd8" in namespace "projected-6151" to be "Succeeded or Failed"
Aug 21 12:07:19.325: INFO: Pod "downwardapi-volume-38f2d426-737e-4d2d-acaf-a9b57e82dfd8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.666278ms
Aug 21 12:07:21.476: INFO: Pod "downwardapi-volume-38f2d426-737e-4d2d-acaf-a9b57e82dfd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171245879s
Aug 21 12:07:23.484: INFO: Pod "downwardapi-volume-38f2d426-737e-4d2d-acaf-a9b57e82dfd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178589926s
Aug 21 12:07:25.491: INFO: Pod "downwardapi-volume-38f2d426-737e-4d2d-acaf-a9b57e82dfd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.18580136s
STEP: Saw pod success
Aug 21 12:07:25.491: INFO: Pod "downwardapi-volume-38f2d426-737e-4d2d-acaf-a9b57e82dfd8" satisfied condition "Succeeded or Failed"
Aug 21 12:07:25.501: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-38f2d426-737e-4d2d-acaf-a9b57e82dfd8 container client-container: 
STEP: delete the pod
Aug 21 12:07:25.551: INFO: Waiting for pod downwardapi-volume-38f2d426-737e-4d2d-acaf-a9b57e82dfd8 to disappear
Aug 21 12:07:25.563: INFO: Pod downwardapi-volume-38f2d426-737e-4d2d-acaf-a9b57e82dfd8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:07:25.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6151" for this suite.

• [SLOW TEST:6.408 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":424,"failed":0}
SSS
------------------------------
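As an aside (not from the log): exposing a container's CPU request through a projected downward API volume uses a resourceFieldRef item. A hedged sketch follows; the request value, divisor, file name and image are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-request-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "client-container",
                Image: "busybox", // illustrative image
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
                },
                Command:      []string{"cat", "/etc/podinfo/cpu_request"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "cpu_request",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "requests.cpu",
                                        Divisor:       resource.MustParse("1m"), // report the request in millicores
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }

    out, _ := yaml.Marshal(pod)
    fmt.Println(string(out))
}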
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:07:25.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-761084c2-7e05-42e0-bb40-05c73b313650
STEP: Creating a pod to test consume configMaps
Aug 21 12:07:25.685: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-58894c34-d41c-42db-b717-b5d1d15874a9" in namespace "projected-4155" to be "Succeeded or Failed"
Aug 21 12:07:25.739: INFO: Pod "pod-projected-configmaps-58894c34-d41c-42db-b717-b5d1d15874a9": Phase="Pending", Reason="", readiness=false. Elapsed: 54.433861ms
Aug 21 12:07:27.901: INFO: Pod "pod-projected-configmaps-58894c34-d41c-42db-b717-b5d1d15874a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21621579s
Aug 21 12:07:29.907: INFO: Pod "pod-projected-configmaps-58894c34-d41c-42db-b717-b5d1d15874a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221978266s
Aug 21 12:07:31.913: INFO: Pod "pod-projected-configmaps-58894c34-d41c-42db-b717-b5d1d15874a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.227753204s
STEP: Saw pod success
Aug 21 12:07:31.913: INFO: Pod "pod-projected-configmaps-58894c34-d41c-42db-b717-b5d1d15874a9" satisfied condition "Succeeded or Failed"
Aug 21 12:07:31.916: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-58894c34-d41c-42db-b717-b5d1d15874a9 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 12:07:32.011: INFO: Waiting for pod pod-projected-configmaps-58894c34-d41c-42db-b717-b5d1d15874a9 to disappear
Aug 21 12:07:32.170: INFO: Pod pod-projected-configmaps-58894c34-d41c-42db-b717-b5d1d15874a9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:07:32.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4155" for this suite.

• [SLOW TEST:6.749 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":427,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
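Not from the log: defaultMode on a projected ConfigMap volume sets the permission bits of every projected file. A sketch with an illustrative 0400 mode and placeholder names:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    mode := int32(0400) // defaultMode applied to every projected file (illustrative value)
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode,
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "projected-configmap-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -l /etc/projected-configmap-volume && cat /etc/projected-configmap-volume/*"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
            }},
        },
    }

    out, _ := yaml.Marshal(pod)
    fmt.Println(string(out))
}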
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:07:32.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-28770a7c-0bc1-4eaa-9fd4-3cf4931b3ff9
STEP: Creating a pod to test consume configMaps
Aug 21 12:07:34.068: INFO: Waiting up to 5m0s for pod "pod-configmaps-14f45491-5cf8-4b34-94e0-1ea315f5c1b1" in namespace "configmap-2475" to be "Succeeded or Failed"
Aug 21 12:07:34.112: INFO: Pod "pod-configmaps-14f45491-5cf8-4b34-94e0-1ea315f5c1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 44.360084ms
Aug 21 12:07:36.377: INFO: Pod "pod-configmaps-14f45491-5cf8-4b34-94e0-1ea315f5c1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309366961s
Aug 21 12:07:38.383: INFO: Pod "pod-configmaps-14f45491-5cf8-4b34-94e0-1ea315f5c1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315009846s
Aug 21 12:07:40.390: INFO: Pod "pod-configmaps-14f45491-5cf8-4b34-94e0-1ea315f5c1b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.322483688s
STEP: Saw pod success
Aug 21 12:07:40.391: INFO: Pod "pod-configmaps-14f45491-5cf8-4b34-94e0-1ea315f5c1b1" satisfied condition "Succeeded or Failed"
Aug 21 12:07:40.396: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-14f45491-5cf8-4b34-94e0-1ea315f5c1b1 container configmap-volume-test: 
STEP: delete the pod
Aug 21 12:07:40.470: INFO: Waiting for pod pod-configmaps-14f45491-5cf8-4b34-94e0-1ea315f5c1b1 to disappear
Aug 21 12:07:40.474: INFO: Pod pod-configmaps-14f45491-5cf8-4b34-94e0-1ea315f5c1b1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:07:40.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2475" for this suite.

• [SLOW TEST:8.145 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":448,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:07:40.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 21 12:07:47.541: INFO: Successfully updated pod "pod-update-activedeadlineseconds-005ac7e3-9ddc-4d44-b924-4ffb12a0ea42"
Aug 21 12:07:47.542: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-005ac7e3-9ddc-4d44-b924-4ffb12a0ea42" in namespace "pods-2563" to be "terminated due to deadline exceeded"
Aug 21 12:07:47.592: INFO: Pod "pod-update-activedeadlineseconds-005ac7e3-9ddc-4d44-b924-4ffb12a0ea42": Phase="Running", Reason="", readiness=true. Elapsed: 50.14961ms
Aug 21 12:07:49.745: INFO: Pod "pod-update-activedeadlineseconds-005ac7e3-9ddc-4d44-b924-4ffb12a0ea42": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.203631416s
Aug 21 12:07:49.746: INFO: Pod "pod-update-activedeadlineseconds-005ac7e3-9ddc-4d44-b924-4ffb12a0ea42" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:07:49.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2563" for this suite.

• [SLOW TEST:9.410 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":465,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:07:49.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-f668fd97-f61f-441f-82f0-b8602672e742
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-f668fd97-f61f-441f-82f0-b8602672e742
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:08:56.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9665" for this suite.

• [SLOW TEST:67.088 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:08:56.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 21 12:08:57.059: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 12:09:07.366: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:10:26.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1014" for this suite.

• [SLOW TEST:90.140 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":34,"skipped":520,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:10:27.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 21 12:10:34.381: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9894 PodName:pod-sharedvolume-1e807357-1a27-427a-94ab-4f2fa9bae6df ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:10:34.381: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:10:34.442712      10 log.go:172] (0x40063fe4d0) (0x400297d400) Create stream
I0821 12:10:34.442904      10 log.go:172] (0x40063fe4d0) (0x400297d400) Stream added, broadcasting: 1
I0821 12:10:34.446144      10 log.go:172] (0x40063fe4d0) Reply frame received for 1
I0821 12:10:34.446375      10 log.go:172] (0x40063fe4d0) (0x400297d4a0) Create stream
I0821 12:10:34.446459      10 log.go:172] (0x40063fe4d0) (0x400297d4a0) Stream added, broadcasting: 3
I0821 12:10:34.448356      10 log.go:172] (0x40063fe4d0) Reply frame received for 3
I0821 12:10:34.448544      10 log.go:172] (0x40063fe4d0) (0x40028e0b40) Create stream
I0821 12:10:34.448631      10 log.go:172] (0x40063fe4d0) (0x40028e0b40) Stream added, broadcasting: 5
I0821 12:10:34.450089      10 log.go:172] (0x40063fe4d0) Reply frame received for 5
I0821 12:10:34.509155      10 log.go:172] (0x40063fe4d0) Data frame received for 5
I0821 12:10:34.509329      10 log.go:172] (0x40028e0b40) (5) Data frame handling
I0821 12:10:34.509471      10 log.go:172] (0x40063fe4d0) Data frame received for 3
I0821 12:10:34.509614      10 log.go:172] (0x400297d4a0) (3) Data frame handling
I0821 12:10:34.509731      10 log.go:172] (0x400297d4a0) (3) Data frame sent
I0821 12:10:34.509812      10 log.go:172] (0x40063fe4d0) Data frame received for 3
I0821 12:10:34.509926      10 log.go:172] (0x400297d4a0) (3) Data frame handling
I0821 12:10:34.510482      10 log.go:172] (0x40063fe4d0) Data frame received for 1
I0821 12:10:34.510577      10 log.go:172] (0x400297d400) (1) Data frame handling
I0821 12:10:34.510659      10 log.go:172] (0x400297d400) (1) Data frame sent
I0821 12:10:34.510734      10 log.go:172] (0x40063fe4d0) (0x400297d400) Stream removed, broadcasting: 1
I0821 12:10:34.510829      10 log.go:172] (0x40063fe4d0) Go away received
I0821 12:10:34.511230      10 log.go:172] (0x40063fe4d0) (0x400297d400) Stream removed, broadcasting: 1
I0821 12:10:34.511339      10 log.go:172] (0x40063fe4d0) (0x400297d4a0) Stream removed, broadcasting: 3
I0821 12:10:34.511432      10 log.go:172] (0x40063fe4d0) (0x40028e0b40) Stream removed, broadcasting: 5
Aug 21 12:10:34.511: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:10:34.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9894" for this suite.

• [SLOW TEST:7.388 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":35,"skipped":527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:10:34.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-6163
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6163 to expose endpoints map[]
Aug 21 12:10:34.651: INFO: Get endpoints failed (24.984824ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 21 12:10:35.660: INFO: successfully validated that service endpoint-test2 in namespace services-6163 exposes endpoints map[] (1.033323331s elapsed)
STEP: Creating pod pod1 in namespace services-6163
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6163 to expose endpoints map[pod1:[80]]
Aug 21 12:10:39.897: INFO: successfully validated that service endpoint-test2 in namespace services-6163 exposes endpoints map[pod1:[80]] (4.227122359s elapsed)
STEP: Creating pod pod2 in namespace services-6163
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6163 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 21 12:10:46.311: INFO: successfully validated that service endpoint-test2 in namespace services-6163 exposes endpoints map[pod1:[80] pod2:[80]] (6.405418171s elapsed)
STEP: Deleting pod pod1 in namespace services-6163
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6163 to expose endpoints map[pod2:[80]]
Aug 21 12:10:46.426: INFO: successfully validated that service endpoint-test2 in namespace services-6163 exposes endpoints map[pod2:[80]] (107.770628ms elapsed)
STEP: Deleting pod pod2 in namespace services-6163
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6163 to expose endpoints map[]
Aug 21 12:10:47.514: INFO: successfully validated that service endpoint-test2 in namespace services-6163 exposes endpoints map[] (1.08213831s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:10:47.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6163" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.044 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":36,"skipped":569,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:10:47.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:10:50.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:10:55.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8595" for this suite.

• [SLOW TEST:8.239 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":574,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:10:55.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:10:56.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82c6a323-b435-469c-94c2-342f37f15bba" in namespace "projected-6963" to be "Succeeded or Failed"
Aug 21 12:10:56.706: INFO: Pod "downwardapi-volume-82c6a323-b435-469c-94c2-342f37f15bba": Phase="Pending", Reason="", readiness=false. Elapsed: 289.604447ms
Aug 21 12:10:58.713: INFO: Pod "downwardapi-volume-82c6a323-b435-469c-94c2-342f37f15bba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296659382s
Aug 21 12:11:00.736: INFO: Pod "downwardapi-volume-82c6a323-b435-469c-94c2-342f37f15bba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320069248s
Aug 21 12:11:02.745: INFO: Pod "downwardapi-volume-82c6a323-b435-469c-94c2-342f37f15bba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.328493889s
STEP: Saw pod success
Aug 21 12:11:02.745: INFO: Pod "downwardapi-volume-82c6a323-b435-469c-94c2-342f37f15bba" satisfied condition "Succeeded or Failed"
Aug 21 12:11:02.750: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-82c6a323-b435-469c-94c2-342f37f15bba container client-container: 
STEP: delete the pod
Aug 21 12:11:02.929: INFO: Waiting for pod downwardapi-volume-82c6a323-b435-469c-94c2-342f37f15bba to disappear
Aug 21 12:11:02.934: INFO: Pod downwardapi-volume-82c6a323-b435-469c-94c2-342f37f15bba no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:11:02.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6963" for this suite.

• [SLOW TEST:7.135 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":581,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:11:02.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-edc0ab81-06fb-4de8-8b20-30efc91f6a5f
STEP: Creating a pod to test consume secrets
Aug 21 12:11:03.090: INFO: Waiting up to 5m0s for pod "pod-secrets-2c7af8c7-e0e6-4f3f-b4a5-ffbd71f1fd09" in namespace "secrets-6025" to be "Succeeded or Failed"
Aug 21 12:11:03.173: INFO: Pod "pod-secrets-2c7af8c7-e0e6-4f3f-b4a5-ffbd71f1fd09": Phase="Pending", Reason="", readiness=false. Elapsed: 83.174714ms
Aug 21 12:11:05.382: INFO: Pod "pod-secrets-2c7af8c7-e0e6-4f3f-b4a5-ffbd71f1fd09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292337015s
Aug 21 12:11:07.415: INFO: Pod "pod-secrets-2c7af8c7-e0e6-4f3f-b4a5-ffbd71f1fd09": Phase="Running", Reason="", readiness=true. Elapsed: 4.325312105s
Aug 21 12:11:09.467: INFO: Pod "pod-secrets-2c7af8c7-e0e6-4f3f-b4a5-ffbd71f1fd09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.376537861s
STEP: Saw pod success
Aug 21 12:11:09.467: INFO: Pod "pod-secrets-2c7af8c7-e0e6-4f3f-b4a5-ffbd71f1fd09" satisfied condition "Succeeded or Failed"
Aug 21 12:11:09.791: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-2c7af8c7-e0e6-4f3f-b4a5-ffbd71f1fd09 container secret-volume-test: 
STEP: delete the pod
Aug 21 12:11:09.983: INFO: Waiting for pod pod-secrets-2c7af8c7-e0e6-4f3f-b4a5-ffbd71f1fd09 to disappear
Aug 21 12:11:10.032: INFO: Pod pod-secrets-2c7af8c7-e0e6-4f3f-b4a5-ffbd71f1fd09 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:11:10.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6025" for this suite.

• [SLOW TEST:7.254 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":584,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:11:10.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:11:22.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6452" for this suite.

• [SLOW TEST:12.058 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":40,"skipped":589,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:11:22.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:11:27.045: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:11:29.068: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608687, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608687, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608687, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608686, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:11:31.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608687, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608687, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608687, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608686, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:11:33.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608687, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608687, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608687, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608686, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:11:36.098: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:11:36.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9636" for this suite.
STEP: Destroying namespace "webhook-9636-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.086 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":41,"skipped":619,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:11:36.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 21 12:11:36.430: INFO: Waiting up to 5m0s for pod "pod-f7027b9c-915c-476b-a040-0cfea7c572ac" in namespace "emptydir-9804" to be "Succeeded or Failed"
Aug 21 12:11:36.472: INFO: Pod "pod-f7027b9c-915c-476b-a040-0cfea7c572ac": Phase="Pending", Reason="", readiness=false. Elapsed: 42.113858ms
Aug 21 12:11:38.480: INFO: Pod "pod-f7027b9c-915c-476b-a040-0cfea7c572ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049886819s
Aug 21 12:11:40.487: INFO: Pod "pod-f7027b9c-915c-476b-a040-0cfea7c572ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057467074s
STEP: Saw pod success
Aug 21 12:11:40.487: INFO: Pod "pod-f7027b9c-915c-476b-a040-0cfea7c572ac" satisfied condition "Succeeded or Failed"
Aug 21 12:11:40.493: INFO: Trying to get logs from node kali-worker pod pod-f7027b9c-915c-476b-a040-0cfea7c572ac container test-container: 
STEP: delete the pod
Aug 21 12:11:40.711: INFO: Waiting for pod pod-f7027b9c-915c-476b-a040-0cfea7c572ac to disappear
Aug 21 12:11:40.720: INFO: Pod pod-f7027b9c-915c-476b-a040-0cfea7c572ac no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:11:40.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9804" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":628,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:11:40.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 21 12:11:40.837: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:11:59.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7162" for this suite.

• [SLOW TEST:18.382 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":665,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:11:59.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:12:01.268: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:12:03.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608721, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608721, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608721, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608721, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:12:05.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608721, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608721, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608721, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608721, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:12:08.349: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:12:08.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6921-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:12:09.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-327" for this suite.
STEP: Destroying namespace "webhook-327-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.746 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":44,"skipped":669,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:12:09.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:12:10.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3208" for this suite.
STEP: Destroying namespace "nspatchtest-b5e2483c-c15e-49d7-a7e0-696093afff96-139" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":45,"skipped":679,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:12:10.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:12:15.541: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:12:17.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608735, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608735, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608735, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608735, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:12:19.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608735, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608735, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608735, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608735, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:12:22.725: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:12:23.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-370" for this suite.
STEP: Destroying namespace "webhook-370-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.770 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":46,"skipped":680,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:12:23.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-0abf04af-fa17-43e9-a4bb-1a23c6f21b2c
STEP: Creating a pod to test consume configMaps
Aug 21 12:12:23.606: INFO: Waiting up to 5m0s for pod "pod-configmaps-a84d24b1-b1e8-48dc-9a1a-b93abbb86fa7" in namespace "configmap-4943" to be "Succeeded or Failed"
Aug 21 12:12:23.684: INFO: Pod "pod-configmaps-a84d24b1-b1e8-48dc-9a1a-b93abbb86fa7": Phase="Pending", Reason="", readiness=false. Elapsed: 77.416204ms
Aug 21 12:12:25.691: INFO: Pod "pod-configmaps-a84d24b1-b1e8-48dc-9a1a-b93abbb86fa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085030636s
Aug 21 12:12:27.698: INFO: Pod "pod-configmaps-a84d24b1-b1e8-48dc-9a1a-b93abbb86fa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091584265s
STEP: Saw pod success
Aug 21 12:12:27.698: INFO: Pod "pod-configmaps-a84d24b1-b1e8-48dc-9a1a-b93abbb86fa7" satisfied condition "Succeeded or Failed"
Aug 21 12:12:27.702: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-a84d24b1-b1e8-48dc-9a1a-b93abbb86fa7 container configmap-volume-test: 
STEP: delete the pod
Aug 21 12:12:27.726: INFO: Waiting for pod pod-configmaps-a84d24b1-b1e8-48dc-9a1a-b93abbb86fa7 to disappear
Aug 21 12:12:27.771: INFO: Pod pod-configmaps-a84d24b1-b1e8-48dc-9a1a-b93abbb86fa7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:12:27.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4943" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":683,"failed":0}

------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:12:27.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:12:27.841: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d742d42b-e3c3-47e4-8bd0-46fb7c26307f" in namespace "security-context-test-5208" to be "Succeeded or Failed"
Aug 21 12:12:27.867: INFO: Pod "busybox-readonly-false-d742d42b-e3c3-47e4-8bd0-46fb7c26307f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.705834ms
Aug 21 12:12:29.874: INFO: Pod "busybox-readonly-false-d742d42b-e3c3-47e4-8bd0-46fb7c26307f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032701186s
Aug 21 12:12:31.882: INFO: Pod "busybox-readonly-false-d742d42b-e3c3-47e4-8bd0-46fb7c26307f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040516591s
Aug 21 12:12:31.882: INFO: Pod "busybox-readonly-false-d742d42b-e3c3-47e4-8bd0-46fb7c26307f" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:12:31.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5208" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":683,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:12:31.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:12:36.497: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:12:38.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608756, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608756, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608756, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608756, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:12:41.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608756, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608756, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608756, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608756, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:12:44.314: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:12:57.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1364" for this suite.
STEP: Destroying namespace "webhook-1364-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:25.388 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":49,"skipped":694,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:12:57.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-3875442b-22f7-434f-9a23-45e0b81571fd
STEP: Creating a pod to test consume configMaps
Aug 21 12:12:57.433: INFO: Waiting up to 5m0s for pod "pod-configmaps-c7599ded-bc9f-4461-98e4-2ff9a401c37a" in namespace "configmap-1022" to be "Succeeded or Failed"
Aug 21 12:12:57.438: INFO: Pod "pod-configmaps-c7599ded-bc9f-4461-98e4-2ff9a401c37a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.406993ms
Aug 21 12:12:59.445: INFO: Pod "pod-configmaps-c7599ded-bc9f-4461-98e4-2ff9a401c37a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011401875s
Aug 21 12:13:01.535: INFO: Pod "pod-configmaps-c7599ded-bc9f-4461-98e4-2ff9a401c37a": Phase="Running", Reason="", readiness=true. Elapsed: 4.101767232s
Aug 21 12:13:03.545: INFO: Pod "pod-configmaps-c7599ded-bc9f-4461-98e4-2ff9a401c37a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111196034s
STEP: Saw pod success
Aug 21 12:13:03.545: INFO: Pod "pod-configmaps-c7599ded-bc9f-4461-98e4-2ff9a401c37a" satisfied condition "Succeeded or Failed"
Aug 21 12:13:03.553: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-c7599ded-bc9f-4461-98e4-2ff9a401c37a container configmap-volume-test: 
STEP: delete the pod
Aug 21 12:13:03.629: INFO: Waiting for pod pod-configmaps-c7599ded-bc9f-4461-98e4-2ff9a401c37a to disappear
Aug 21 12:13:03.641: INFO: Pod pod-configmaps-c7599ded-bc9f-4461-98e4-2ff9a401c37a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:13:03.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1022" for this suite.

• [SLOW TEST:6.365 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":721,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:13:03.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 21 12:13:03.762: INFO: Waiting up to 5m0s for pod "pod-65c72f78-5fb1-4233-8751-2b2f0f9ec55e" in namespace "emptydir-2521" to be "Succeeded or Failed"
Aug 21 12:13:03.783: INFO: Pod "pod-65c72f78-5fb1-4233-8751-2b2f0f9ec55e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.121769ms
Aug 21 12:13:05.790: INFO: Pod "pod-65c72f78-5fb1-4233-8751-2b2f0f9ec55e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027725501s
Aug 21 12:13:07.796: INFO: Pod "pod-65c72f78-5fb1-4233-8751-2b2f0f9ec55e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034279181s
STEP: Saw pod success
Aug 21 12:13:07.797: INFO: Pod "pod-65c72f78-5fb1-4233-8751-2b2f0f9ec55e" satisfied condition "Succeeded or Failed"
Aug 21 12:13:07.801: INFO: Trying to get logs from node kali-worker pod pod-65c72f78-5fb1-4233-8751-2b2f0f9ec55e container test-container: 
STEP: delete the pod
Aug 21 12:13:07.822: INFO: Waiting for pod pod-65c72f78-5fb1-4233-8751-2b2f0f9ec55e to disappear
Aug 21 12:13:07.948: INFO: Pod pod-65c72f78-5fb1-4233-8751-2b2f0f9ec55e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:13:07.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2521" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":780,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:13:07.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:13:08.086: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76935b7e-6711-4c55-9aae-f47171e2ad78" in namespace "downward-api-4981" to be "Succeeded or Failed"
Aug 21 12:13:08.091: INFO: Pod "downwardapi-volume-76935b7e-6711-4c55-9aae-f47171e2ad78": Phase="Pending", Reason="", readiness=false. Elapsed: 5.400371ms
Aug 21 12:13:10.099: INFO: Pod "downwardapi-volume-76935b7e-6711-4c55-9aae-f47171e2ad78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012933035s
Aug 21 12:13:12.105: INFO: Pod "downwardapi-volume-76935b7e-6711-4c55-9aae-f47171e2ad78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019036002s
STEP: Saw pod success
Aug 21 12:13:12.105: INFO: Pod "downwardapi-volume-76935b7e-6711-4c55-9aae-f47171e2ad78" satisfied condition "Succeeded or Failed"
Aug 21 12:13:12.109: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-76935b7e-6711-4c55-9aae-f47171e2ad78 container client-container: 
STEP: delete the pod
Aug 21 12:13:12.170: INFO: Waiting for pod downwardapi-volume-76935b7e-6711-4c55-9aae-f47171e2ad78 to disappear
Aug 21 12:13:12.176: INFO: Pod downwardapi-volume-76935b7e-6711-4c55-9aae-f47171e2ad78 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:13:12.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4981" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":783,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:13:12.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-5843
STEP: creating replication controller nodeport-test in namespace services-5843
I0821 12:13:12.349268      10 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-5843, replica count: 2
I0821 12:13:15.402934      10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 12:13:18.405500      10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 12:13:18.406: INFO: Creating new exec pod
Aug 21 12:13:23.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-5843 execpodrlb6r -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 21 12:13:34.797: INFO: stderr: "I0821 12:13:34.691950     239 log.go:172] (0x40000f2fd0) (0x400080d360) Create stream\nI0821 12:13:34.696299     239 log.go:172] (0x40000f2fd0) (0x400080d360) Stream added, broadcasting: 1\nI0821 12:13:34.711257     239 log.go:172] (0x40000f2fd0) Reply frame received for 1\nI0821 12:13:34.712296     239 log.go:172] (0x40000f2fd0) (0x4000ba6000) Create stream\nI0821 12:13:34.712387     239 log.go:172] (0x40000f2fd0) (0x4000ba6000) Stream added, broadcasting: 3\nI0821 12:13:34.714094     239 log.go:172] (0x40000f2fd0) Reply frame received for 3\nI0821 12:13:34.714324     239 log.go:172] (0x40000f2fd0) (0x40008ca0a0) Create stream\nI0821 12:13:34.714383     239 log.go:172] (0x40000f2fd0) (0x40008ca0a0) Stream added, broadcasting: 5\nI0821 12:13:34.715469     239 log.go:172] (0x40000f2fd0) Reply frame received for 5\nI0821 12:13:34.780586     239 log.go:172] (0x40000f2fd0) Data frame received for 5\nI0821 12:13:34.781064     239 log.go:172] (0x40008ca0a0) (5) Data frame handling\nI0821 12:13:34.781497     239 log.go:172] (0x40008ca0a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0821 12:13:34.781955     239 log.go:172] (0x40000f2fd0) Data frame received for 5\nI0821 12:13:34.782031     239 log.go:172] (0x40008ca0a0) (5) Data frame handling\nI0821 12:13:34.782113     239 log.go:172] (0x40008ca0a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0821 12:13:34.782236     239 log.go:172] (0x40000f2fd0) Data frame received for 3\nI0821 12:13:34.782367     239 log.go:172] (0x4000ba6000) (3) Data frame handling\nI0821 12:13:34.782439     239 log.go:172] (0x40000f2fd0) Data frame received for 5\nI0821 12:13:34.782522     239 log.go:172] (0x40008ca0a0) (5) Data frame handling\nI0821 12:13:34.782575     239 log.go:172] (0x40000f2fd0) Data frame received for 1\nI0821 12:13:34.782639     239 log.go:172] (0x400080d360) (1) Data frame handling\nI0821 12:13:34.782688     239 log.go:172] (0x400080d360) (1) Data frame sent\nI0821 12:13:34.783365     239 log.go:172] (0x40000f2fd0) (0x400080d360) Stream removed, broadcasting: 1\nI0821 12:13:34.785619     239 log.go:172] (0x40000f2fd0) Go away received\nI0821 12:13:34.787275     239 log.go:172] (0x40000f2fd0) (0x400080d360) Stream removed, broadcasting: 1\nI0821 12:13:34.787652     239 log.go:172] (0x40000f2fd0) (0x4000ba6000) Stream removed, broadcasting: 3\nI0821 12:13:34.788243     239 log.go:172] (0x40000f2fd0) (0x40008ca0a0) Stream removed, broadcasting: 5\n"
Aug 21 12:13:34.798: INFO: stdout: ""
Aug 21 12:13:34.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-5843 execpodrlb6r -- /bin/sh -x -c nc -zv -t -w 2 10.110.198.248 80'
Aug 21 12:13:36.349: INFO: stderr: "I0821 12:13:36.250306     273 log.go:172] (0x4000ac2b00) (0x4000984000) Create stream\nI0821 12:13:36.253185     273 log.go:172] (0x4000ac2b00) (0x4000984000) Stream added, broadcasting: 1\nI0821 12:13:36.262934     273 log.go:172] (0x4000ac2b00) Reply frame received for 1\nI0821 12:13:36.264020     273 log.go:172] (0x4000ac2b00) (0x40009a0140) Create stream\nI0821 12:13:36.264141     273 log.go:172] (0x4000ac2b00) (0x40009a0140) Stream added, broadcasting: 3\nI0821 12:13:36.265666     273 log.go:172] (0x4000ac2b00) Reply frame received for 3\nI0821 12:13:36.265895     273 log.go:172] (0x4000ac2b00) (0x40009a01e0) Create stream\nI0821 12:13:36.265945     273 log.go:172] (0x4000ac2b00) (0x40009a01e0) Stream added, broadcasting: 5\nI0821 12:13:36.267328     273 log.go:172] (0x4000ac2b00) Reply frame received for 5\nI0821 12:13:36.325643     273 log.go:172] (0x4000ac2b00) Data frame received for 3\nI0821 12:13:36.326338     273 log.go:172] (0x4000ac2b00) Data frame received for 1\nI0821 12:13:36.326489     273 log.go:172] (0x4000984000) (1) Data frame handling\nI0821 12:13:36.326599     273 log.go:172] (0x40009a0140) (3) Data frame handling\nI0821 12:13:36.327671     273 log.go:172] (0x4000ac2b00) Data frame received for 5\nI0821 12:13:36.327870     273 log.go:172] (0x40009a01e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.198.248 80\nConnection to 10.110.198.248 80 port [tcp/http] succeeded!\nI0821 12:13:36.330723     273 log.go:172] (0x40009a01e0) (5) Data frame sent\nI0821 12:13:36.330948     273 log.go:172] (0x4000984000) (1) Data frame sent\nI0821 12:13:36.331304     273 log.go:172] (0x4000ac2b00) Data frame received for 5\nI0821 12:13:36.331390     273 log.go:172] (0x40009a01e0) (5) Data frame handling\nI0821 12:13:36.332846     273 log.go:172] (0x4000ac2b00) (0x4000984000) Stream removed, broadcasting: 1\nI0821 12:13:36.333349     273 log.go:172] (0x4000ac2b00) Go away received\nI0821 12:13:36.337651     273 log.go:172] (0x4000ac2b00) (0x4000984000) Stream removed, broadcasting: 1\nI0821 12:13:36.337932     273 log.go:172] (0x4000ac2b00) (0x40009a0140) Stream removed, broadcasting: 3\nI0821 12:13:36.338113     273 log.go:172] (0x4000ac2b00) (0x40009a01e0) Stream removed, broadcasting: 5\n"
Aug 21 12:13:36.349: INFO: stdout: ""
Aug 21 12:13:36.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-5843 execpodrlb6r -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 32579'
Aug 21 12:13:37.888: INFO: stderr: "I0821 12:13:37.749009     296 log.go:172] (0x40000f2420) (0x4000b3a140) Create stream\nI0821 12:13:37.751815     296 log.go:172] (0x40000f2420) (0x4000b3a140) Stream added, broadcasting: 1\nI0821 12:13:37.765584     296 log.go:172] (0x40000f2420) Reply frame received for 1\nI0821 12:13:37.767016     296 log.go:172] (0x40000f2420) (0x4000b3a280) Create stream\nI0821 12:13:37.767148     296 log.go:172] (0x40000f2420) (0x4000b3a280) Stream added, broadcasting: 3\nI0821 12:13:37.769665     296 log.go:172] (0x40000f2420) Reply frame received for 3\nI0821 12:13:37.770372     296 log.go:172] (0x40000f2420) (0x4000b3a320) Create stream\nI0821 12:13:37.770497     296 log.go:172] (0x40000f2420) (0x4000b3a320) Stream added, broadcasting: 5\nI0821 12:13:37.772441     296 log.go:172] (0x40000f2420) Reply frame received for 5\nI0821 12:13:37.863831     296 log.go:172] (0x40000f2420) Data frame received for 3\nI0821 12:13:37.864250     296 log.go:172] (0x40000f2420) Data frame received for 5\nI0821 12:13:37.864368     296 log.go:172] (0x4000b3a320) (5) Data frame handling\nI0821 12:13:37.865308     296 log.go:172] (0x4000b3a280) (3) Data frame handling\nI0821 12:13:37.865586     296 log.go:172] (0x40000f2420) Data frame received for 1\nI0821 12:13:37.865666     296 log.go:172] (0x4000b3a140) (1) Data frame handling\nI0821 12:13:37.866026     296 log.go:172] (0x4000b3a140) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.16 32579\nConnection to 172.18.0.16 32579 port [tcp/32579] succeeded!\nI0821 12:13:37.866661     296 log.go:172] (0x4000b3a320) (5) Data frame sent\nI0821 12:13:37.867117     296 log.go:172] (0x40000f2420) Data frame received for 5\nI0821 12:13:37.867220     296 log.go:172] (0x4000b3a320) (5) Data frame handling\nI0821 12:13:37.868482     296 log.go:172] (0x40000f2420) (0x4000b3a140) Stream removed, broadcasting: 1\nI0821 12:13:37.870284     296 log.go:172] (0x40000f2420) Go away received\nI0821 12:13:37.876193     296 log.go:172] (0x40000f2420) (0x4000b3a140) Stream removed, broadcasting: 1\nI0821 12:13:37.876590     296 log.go:172] (0x40000f2420) (0x4000b3a280) Stream removed, broadcasting: 3\nI0821 12:13:37.876990     296 log.go:172] (0x40000f2420) (0x4000b3a320) Stream removed, broadcasting: 5\n"
Aug 21 12:13:37.889: INFO: stdout: ""
Aug 21 12:13:37.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-5843 execpodrlb6r -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32579'
Aug 21 12:13:39.371: INFO: stderr: "I0821 12:13:39.253514     317 log.go:172] (0x4000a7e0b0) (0x40007df540) Create stream\nI0821 12:13:39.259693     317 log.go:172] (0x4000a7e0b0) (0x40007df540) Stream added, broadcasting: 1\nI0821 12:13:39.276568     317 log.go:172] (0x4000a7e0b0) Reply frame received for 1\nI0821 12:13:39.277839     317 log.go:172] (0x4000a7e0b0) (0x40007df5e0) Create stream\nI0821 12:13:39.277945     317 log.go:172] (0x4000a7e0b0) (0x40007df5e0) Stream added, broadcasting: 3\nI0821 12:13:39.279903     317 log.go:172] (0x4000a7e0b0) Reply frame received for 3\nI0821 12:13:39.280342     317 log.go:172] (0x4000a7e0b0) (0x40007df680) Create stream\nI0821 12:13:39.280457     317 log.go:172] (0x4000a7e0b0) (0x40007df680) Stream added, broadcasting: 5\nI0821 12:13:39.282104     317 log.go:172] (0x4000a7e0b0) Reply frame received for 5\nI0821 12:13:39.349435     317 log.go:172] (0x4000a7e0b0) Data frame received for 5\nI0821 12:13:39.349733     317 log.go:172] (0x4000a7e0b0) Data frame received for 3\nI0821 12:13:39.349992     317 log.go:172] (0x4000a7e0b0) Data frame received for 1\nI0821 12:13:39.350148     317 log.go:172] (0x40007df5e0) (3) Data frame handling\nI0821 12:13:39.350266     317 log.go:172] (0x40007df680) (5) Data frame handling\nI0821 12:13:39.350702     317 log.go:172] (0x40007df540) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32579\nConnection to 172.18.0.13 32579 port [tcp/32579] succeeded!\nI0821 12:13:39.353804     317 log.go:172] (0x40007df540) (1) Data frame sent\nI0821 12:13:39.353985     317 log.go:172] (0x40007df680) (5) Data frame sent\nI0821 12:13:39.355131     317 log.go:172] (0x4000a7e0b0) Data frame received for 5\nI0821 12:13:39.355223     317 log.go:172] (0x40007df680) (5) Data frame handling\nI0821 12:13:39.356161     317 log.go:172] (0x4000a7e0b0) (0x40007df540) Stream removed, broadcasting: 1\nI0821 12:13:39.357350     317 log.go:172] (0x4000a7e0b0) Go away received\nI0821 12:13:39.360711     317 log.go:172] (0x4000a7e0b0) (0x40007df540) Stream removed, broadcasting: 1\nI0821 12:13:39.361495     317 log.go:172] (0x4000a7e0b0) (0x40007df5e0) Stream removed, broadcasting: 3\nI0821 12:13:39.361778     317 log.go:172] (0x4000a7e0b0) (0x40007df680) Stream removed, broadcasting: 5\n"
Aug 21 12:13:39.371: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:13:39.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5843" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:27.196 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":53,"skipped":841,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:13:39.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:13:39.553: INFO: Create a RollingUpdate DaemonSet
Aug 21 12:13:39.563: INFO: Check that daemon pods launch on every node of the cluster
Aug 21 12:13:39.573: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:13:39.583: INFO: Number of nodes with available pods: 0
Aug 21 12:13:39.583: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:13:40.600: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:13:40.607: INFO: Number of nodes with available pods: 0
Aug 21 12:13:40.607: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:13:41.938: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:13:42.330: INFO: Number of nodes with available pods: 0
Aug 21 12:13:42.331: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:13:42.732: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:13:42.958: INFO: Number of nodes with available pods: 0
Aug 21 12:13:42.958: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:13:43.652: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:13:43.716: INFO: Number of nodes with available pods: 1
Aug 21 12:13:43.716: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:13:44.611: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:13:44.646: INFO: Number of nodes with available pods: 2
Aug 21 12:13:44.646: INFO: Number of running nodes: 2, number of available pods: 2
Aug 21 12:13:44.646: INFO: Update the DaemonSet to trigger a rollout
Aug 21 12:13:44.663: INFO: Updating DaemonSet daemon-set
Aug 21 12:13:59.722: INFO: Roll back the DaemonSet before rollout is complete
Aug 21 12:13:59.736: INFO: Updating DaemonSet daemon-set
Aug 21 12:13:59.737: INFO: Make sure DaemonSet rollback is complete
Aug 21 12:13:59.800: INFO: Wrong image for pod: daemon-set-7vvx5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 12:13:59.801: INFO: Pod daemon-set-7vvx5 is not available
Aug 21 12:13:59.811: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:14:00.818: INFO: Wrong image for pod: daemon-set-7vvx5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 12:14:00.818: INFO: Pod daemon-set-7vvx5 is not available
Aug 21 12:14:00.826: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:14:01.863: INFO: Wrong image for pod: daemon-set-7vvx5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 21 12:14:01.863: INFO: Pod daemon-set-7vvx5 is not available
Aug 21 12:14:01.878: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:14:02.820: INFO: Pod daemon-set-ffpk7 is not available
Aug 21 12:14:02.830: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5478, will wait for the garbage collector to delete the pods
Aug 21 12:14:02.906: INFO: Deleting DaemonSet.extensions daemon-set took: 8.72169ms
Aug 21 12:14:03.106: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.757798ms
Aug 21 12:14:05.917: INFO: Number of nodes with available pods: 0
Aug 21 12:14:05.917: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 12:14:05.921: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5478/daemonsets","resourceVersion":"2111610"},"items":null}

Aug 21 12:14:05.924: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5478/pods","resourceVersion":"2111610"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:14:05.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5478" for this suite.

• [SLOW TEST:26.564 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":54,"skipped":849,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:14:05.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8075.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8075.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8075.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8075.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8075.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8075.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 12:14:14.232: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:14.237: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:14.241: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:14.246: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:14.261: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:14.266: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:14.271: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:14.276: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:14.285: INFO: Lookups using dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local]

Aug 21 12:14:19.293: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:19.331: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:19.336: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:19.340: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:19.352: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:19.355: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:19.359: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:19.363: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:19.372: INFO: Lookups using dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local]

Aug 21 12:14:24.294: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:24.300: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:24.304: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:24.308: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:24.327: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:24.332: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:24.336: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:24.340: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:24.348: INFO: Lookups using dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local]

Aug 21 12:14:29.319: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:29.324: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:29.328: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:29.333: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:29.347: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:29.351: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:29.355: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:29.359: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:29.367: INFO: Lookups using dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local]

Aug 21 12:14:34.293: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:34.299: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:34.303: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:34.308: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:34.321: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:34.326: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:34.330: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:34.334: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:34.339: INFO: Lookups using dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local]

Aug 21 12:14:39.293: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:39.298: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:39.303: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:39.307: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:39.320: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:39.360: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:39.366: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:39.371: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local from pod dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed: the server could not find the requested resource (get pods dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed)
Aug 21 12:14:39.379: INFO: Lookups using dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8075.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8075.svc.cluster.local jessie_udp@dns-test-service-2.dns-8075.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8075.svc.cluster.local]

Aug 21 12:14:44.341: INFO: DNS probes using dns-8075/dns-test-f1bb47b9-769e-4431-ada3-9306e3b5caed succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:14:44.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8075" for this suite.

• [SLOW TEST:38.874 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":55,"skipped":850,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:14:44.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:14:58.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5504" for this suite.

• [SLOW TEST:13.752 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":56,"skipped":871,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:14:58.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 21 12:14:58.676: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 12:14:58.718: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 12:14:58.724: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 21 12:14:58.756: INFO: kube-proxy-vn4t5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 12:14:58.756: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 12:14:58.756: INFO: kindnet-kkxd5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 12:14:58.756: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 12:14:58.756: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 21 12:14:58.799: INFO: kindnet-qzfqb from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 12:14:58.799: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 12:14:58.799: INFO: rally-232bed22-12bz9vnq from c-rally-232bed22-uuytn5dj started at 2020-08-21 12:12:58 +0000 UTC (1 container statuses recorded)
Aug 21 12:14:58.799: INFO: 	Container rally-232bed22-12bz9vnq ready: false, restart count 0
Aug 21 12:14:58.799: INFO: kube-proxy-c52ll from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 12:14:58.799: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-eb9ccc9f-0f0b-44b8-b4a5-6065f85c2981 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-eb9ccc9f-0f0b-44b8-b4a5-6065f85c2981 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-eb9ccc9f-0f0b-44b8-b4a5-6065f85c2981
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:15:09.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1047" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:10.809 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":57,"skipped":886,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:15:09.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:15:12.737: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 21 12:15:15.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608912, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608912, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608912, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733608912, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:15:18.163: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:15:18.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:15:19.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-409" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:10.200 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":58,"skipped":887,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:15:19.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-r5jz
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 12:15:21.702: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-r5jz" in namespace "subpath-4383" to be "Succeeded or Failed"
Aug 21 12:15:22.130: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Pending", Reason="", readiness=false. Elapsed: 428.024437ms
Aug 21 12:15:24.135: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.433008726s
Aug 21 12:15:26.298: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.596023607s
Aug 21 12:15:28.303: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.600752131s
Aug 21 12:15:30.311: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Running", Reason="", readiness=true. Elapsed: 8.608810888s
Aug 21 12:15:32.321: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Running", Reason="", readiness=true. Elapsed: 10.618161521s
Aug 21 12:15:34.329: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Running", Reason="", readiness=true. Elapsed: 12.626236907s
Aug 21 12:15:36.340: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Running", Reason="", readiness=true. Elapsed: 14.637729224s
Aug 21 12:15:38.346: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Running", Reason="", readiness=true. Elapsed: 16.643652149s
Aug 21 12:15:40.353: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Running", Reason="", readiness=true. Elapsed: 18.650462949s
Aug 21 12:15:42.763: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Running", Reason="", readiness=true. Elapsed: 21.06089225s
Aug 21 12:15:44.772: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Running", Reason="", readiness=true. Elapsed: 23.069657116s
Aug 21 12:15:46.779: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Running", Reason="", readiness=true. Elapsed: 25.076931852s
Aug 21 12:15:48.786: INFO: Pod "pod-subpath-test-configmap-r5jz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.083680652s
STEP: Saw pod success
Aug 21 12:15:48.786: INFO: Pod "pod-subpath-test-configmap-r5jz" satisfied condition "Succeeded or Failed"
Aug 21 12:15:48.792: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-r5jz container test-container-subpath-configmap-r5jz: 
STEP: delete the pod
Aug 21 12:15:48.832: INFO: Waiting for pod pod-subpath-test-configmap-r5jz to disappear
Aug 21 12:15:48.839: INFO: Pod pod-subpath-test-configmap-r5jz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-r5jz
Aug 21 12:15:48.840: INFO: Deleting pod "pod-subpath-test-configmap-r5jz" in namespace "subpath-4383"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:15:48.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4383" for this suite.

• [SLOW TEST:29.283 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":59,"skipped":890,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:15:48.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Aug 21 12:15:48.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-5676 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 21 12:15:50.259: INFO: stderr: ""
Aug 21 12:15:50.259: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Aug 21 12:15:50.260: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 21 12:15:50.261: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5676" to be "running and ready, or succeeded"
Aug 21 12:15:50.267: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090222ms
Aug 21 12:15:52.275: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01395669s
Aug 21 12:15:54.282: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.021217846s
Aug 21 12:15:54.283: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 21 12:15:54.283: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 21 12:15:54.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5676'
Aug 21 12:15:55.549: INFO: stderr: ""
Aug 21 12:15:55.550: INFO: stdout: "I0821 12:15:52.772940       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/55p 415\nI0821 12:15:52.973059       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/rq5 273\nI0821 12:15:53.173112       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/7nbm 319\nI0821 12:15:53.373136       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/g8x 223\nI0821 12:15:53.573117       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/vgl 267\nI0821 12:15:53.773092       1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/s7p 306\nI0821 12:15:53.973137       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/m8d 407\nI0821 12:15:54.173113       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/rkb 530\nI0821 12:15:54.373087       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/p8lp 339\nI0821 12:15:54.573051       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/mns 215\nI0821 12:15:54.773099       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/7jdr 476\nI0821 12:15:54.973053       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/srct 413\nI0821 12:15:55.173105       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/ltd 507\nI0821 12:15:55.373087       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/g8sf 238\n"
STEP: limiting log lines
Aug 21 12:15:55.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5676 --tail=1'
Aug 21 12:15:56.822: INFO: stderr: ""
Aug 21 12:15:56.823: INFO: stdout: "I0821 12:15:56.573095       1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/hxkz 460\nI0821 12:15:56.773102       1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/4vz 502\n"
Aug 21 12:15:56.823: INFO: got output "I0821 12:15:56.573095       1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/hxkz 460\nI0821 12:15:56.773102       1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/4vz 502\n"
Aug 21 12:15:56.831: FAIL: Expected
    <int>: 2
to equal
    <int>: 1

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.21.3()
	/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1329 +0x3f8
k8s.io/kubernetes/test/e2e.RunE2ETests(0x40030a7600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x308
k8s.io/kubernetes/test/e2e.TestE2E(0x40030a7600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x28
testing.tRunner(0x40030a7600, 0x4447430)
	/usr/local/go/src/testing/testing.go:909 +0xb8
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x2c0
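The failure above comes from the "limiting log lines" step: `kubectl logs --tail=1` returned two lines (entries 19 and 20 of the generator's output), while the assertion at kubectl.go:1329 expected exactly one. The same tail behaviour can also be exercised through the pod log subresource; a minimal client-go sketch, with the pod name and namespace taken from the log but the kubeconfig path assumed:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ask for only the last line of the pod's log, the programmatic
	// equivalent of `kubectl logs logs-generator --tail=1`.
	tail := int64(1)
	req := cs.CoreV1().Pods("kubectl-5676").GetLogs("logs-generator", &corev1.PodLogOptions{TailLines: &tail})
	raw, err := req.Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("got %d bytes of log output:\n%s", len(raw), raw)
}
```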
[AfterEach] Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Aug 21 12:15:56.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5676'
Aug 21 12:16:09.092: INFO: stderr: ""
Aug 21 12:16:09.093: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "kubectl-5676".
STEP: Found 5 events.
Aug 21 12:16:09.118: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for logs-generator: {default-scheduler } Scheduled: Successfully assigned kubectl-5676/logs-generator to kali-worker2
Aug 21 12:16:09.118: INFO: At 2020-08-21 12:15:51 +0000 UTC - event for logs-generator: {kubelet kali-worker2} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12" already present on machine
Aug 21 12:16:09.118: INFO: At 2020-08-21 12:15:52 +0000 UTC - event for logs-generator: {kubelet kali-worker2} Created: Created container logs-generator
Aug 21 12:16:09.118: INFO: At 2020-08-21 12:15:52 +0000 UTC - event for logs-generator: {kubelet kali-worker2} Started: Started container logs-generator
Aug 21 12:16:09.118: INFO: At 2020-08-21 12:15:58 +0000 UTC - event for logs-generator: {kubelet kali-worker2} Killing: Stopping container logs-generator
Aug 21 12:16:09.124: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug 21 12:16:09.125: INFO: 
Aug 21 12:16:09.134: INFO: 
Logging node info for node kali-control-plane
Aug 21 12:16:09.139: INFO: Node Info: &Node{ObjectMeta:{kali-control-plane   /api/v1/nodes/kali-control-plane aabfb797-39b7-4cdb-a600-9c37cf9f5f27 2112100 0 2020-08-15 09:39:46 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-08-15 09:39:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-08-15 09:40:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 48 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 12:15:24 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 
123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 
110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-21 12:15:24 +0000 UTC,LastTransitionTime:2020-08-15 09:39:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-21 12:15:24 +0000 UTC,LastTransitionTime:2020-08-15 09:39:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-21 12:15:24 +0000 UTC,LastTransitionTime:2020-08-15 09:39:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-21 12:15:24 +0000 UTC,LastTransitionTime:2020-08-15 09:40:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:kali-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:04bdd55b92ef4b87b98c1323984fd428,SystemUUID:98a7b883-5496-49b8-a15e-cf216c9b1f46,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu Groovy Gorilla (development branch),ContainerRuntimeVersion:containerd://1.4.0-rc.1-4-g43366250,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:146688265,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:133590102,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:132867355,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:113140489,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug 21 12:16:09.148: INFO: 
Logging kubelet events for node kali-control-plane
Aug 21 12:16:09.153: INFO: 
Logging pods the kubelet thinks are on node kali-control-plane
Aug 21 12:16:09.196: INFO: etcd-kali-control-plane started at 2020-08-15 09:39:51 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.196: INFO: 	Container etcd ready: true, restart count 0
Aug 21 12:16:09.196: INFO: local-path-provisioner-5b4b545c55-988r4 started at 2020-08-15 09:40:21 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.196: INFO: 	Container local-path-provisioner ready: true, restart count 0
Aug 21 12:16:09.196: INFO: coredns-66bff467f8-k8c2r started at 2020-08-15 09:40:25 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.196: INFO: 	Container coredns ready: true, restart count 0
Aug 21 12:16:09.196: INFO: coredns-66bff467f8-2567d started at 2020-08-15 09:40:21 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.196: INFO: 	Container coredns ready: true, restart count 0
Aug 21 12:16:09.196: INFO: kube-apiserver-kali-control-plane started at 2020-08-15 09:39:52 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.196: INFO: 	Container kube-apiserver ready: true, restart count 0
Aug 21 12:16:09.196: INFO: kube-controller-manager-kali-control-plane started at 2020-08-15 09:39:51 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.196: INFO: 	Container kube-controller-manager ready: true, restart count 3
Aug 21 12:16:09.196: INFO: kube-scheduler-kali-control-plane started at 2020-08-15 09:39:51 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.196: INFO: 	Container kube-scheduler ready: true, restart count 2
Aug 21 12:16:09.197: INFO: kube-proxy-2d447 started at 2020-08-15 09:40:06 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.197: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 12:16:09.197: INFO: kindnet-gblkw started at 2020-08-15 09:40:06 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.197: INFO: 	Container kindnet-cni ready: true, restart count 0
W0821 12:16:09.207203      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 12:16:09.357: INFO: 
Latency metrics for node kali-control-plane
Aug 21 12:16:09.358: INFO: 
Logging node info for node kali-worker
Aug 21 12:16:09.364: INFO: Node Info: &Node{ObjectMeta:{kali-worker   /api/v1/nodes/kali-worker 6fd72b37-3b23-44b0-a93c-5fe74f0cc459 2111945 0 2020-08-15 09:40:21 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-08-15 09:40:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-08-15 09:41:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 50 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubelet Update v1 2020-08-21 12:13:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 
116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 
123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-21 12:13:00 +0000 UTC,LastTransitionTime:2020-08-15 09:40:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-21 12:13:00 +0000 UTC,LastTransitionTime:2020-08-15 09:40:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-21 12:13:00 +0000 UTC,LastTransitionTime:2020-08-15 09:40:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-21 12:13:00 +0000 UTC,LastTransitionTime:2020-08-15 09:41:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:kali-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:15cd0f7658ab411a916bf7e39e541afc,SystemUUID:88b71f10-6ef7-42ff-85c4-67542ea9524d,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu Groovy Gorilla (development branch),ContainerRuntimeVersion:containerd://1.4.0-rc.1-4-g43366250,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:232be9c5a4400e4c5e0932fde50af8f379e3e9ddd4d3f28da6ec78c86f6ed9f6 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386367560,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:0b4d47a5161ecb6b44f6a479a27522b802096a2deea049cd6f3c01a62b585318 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360604157,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:28557b896e190c72f02121314ca7c9abaca30f91a733b566b2c44b761e5a252c docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351361235,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:257ef9011d4ff30771c0c48ef7e3b16926dce88c17d4435953f433fa9e0d731a docker.io/ollivier/clearwater-homer:latest],SizeBytes:344184630,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:eb85c150a60609d7b22b70b99d6a1a7a1c035fd64e30cca203a8b8d167bb7938 docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327110542,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:95d9d53fc68c24deb2095b7b91aa7e53090f99e9c1d5c43dcf5d9a6fb8a8cdc2 
docker.io/ollivier/clearwater-bono:latest],SizeBytes:303550943,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:861863a8f603b8851858fcb66492d5fa8af26e14ec84a26da5d75fe762b144b2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298507433,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:98347f9bf0eaf79649590e3fa0ea8d1938ae50d7703e8f9c171f0d74520ac7fb docker.io/ollivier/clearwater-homestead:latest],SizeBytes:295048084,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:adfa3978f2c94734010c014a2be7db9bc328419e0a205904543a86cd0719bd1a docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287324942,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:3e838bae03946022eae06e3d343167d4b28507909e9c17e1bf574a23b423f83d docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285384791,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:146688265,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:133590102,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:132867355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:113140489,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:77e928c23a5942aa681646be96dfb5897efe17b1e8676e8e94003ad08891b881 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39388175,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 
gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest],SizeBytes:767890,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug 21 12:16:09.367: INFO: 
Logging kubelet events for node kali-worker
Aug 21 12:16:09.371: INFO: 
Logging pods the kubelet thinks are on node kali-worker
Aug 21 12:16:09.387: INFO: kindnet-kkxd5 started at 2020-08-15 09:40:28 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.387: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 12:16:09.387: INFO: kube-proxy-vn4t5 started at 2020-08-15 09:40:28 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.387: INFO: 	Container kube-proxy ready: true, restart count 0
W0821 12:16:09.396348      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 12:16:09.496: INFO: 
Latency metrics for node kali-worker
Aug 21 12:16:09.496: INFO: 
Logging node info for node kali-worker2
Aug 21 12:16:09.503: INFO: Node Info: &Node{ObjectMeta:{kali-worker2   /api/v1/nodes/kali-worker2 d307d336-a411-4e89-8554-a64ddf81b196 2111587 0 2020-08-15 09:40:21 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kali-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-08-15 09:40:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2020-08-15 09:41:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 49 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubelet Update v1 2020-08-21 12:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 
105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 
58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-21 12:14:02 +0000 UTC,LastTransitionTime:2020-08-15 09:40:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-21 12:14:02 +0000 UTC,LastTransitionTime:2020-08-15 09:40:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-21 12:14:02 +0000 UTC,LastTransitionTime:2020-08-15 09:40:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-21 12:14:02 +0000 UTC,LastTransitionTime:2020-08-15 09:41:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:kali-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0a968d387b254f869ddebe183c042d8a,SystemUUID:7a02c6c9-87c1-4f14-a421-6690546a5dda,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu Groovy Gorilla (development branch),ContainerRuntimeVersion:containerd://1.4.0-rc.1-4-g43366250,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:232be9c5a4400e4c5e0932fde50af8f379e3e9ddd4d3f28da6ec78c86f6ed9f6 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386367560,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:0b4d47a5161ecb6b44f6a479a27522b802096a2deea049cd6f3c01a62b585318 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360604157,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:28557b896e190c72f02121314ca7c9abaca30f91a733b566b2c44b761e5a252c docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351361235,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:257ef9011d4ff30771c0c48ef7e3b16926dce88c17d4435953f433fa9e0d731a docker.io/ollivier/clearwater-homer:latest],SizeBytes:344184630,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:eb85c150a60609d7b22b70b99d6a1a7a1c035fd64e30cca203a8b8d167bb7938 docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327110542,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:95d9d53fc68c24deb2095b7b91aa7e53090f99e9c1d5c43dcf5d9a6fb8a8cdc2 
docker.io/ollivier/clearwater-bono:latest],SizeBytes:303550943,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:861863a8f603b8851858fcb66492d5fa8af26e14ec84a26da5d75fe762b144b2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298507433,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:98347f9bf0eaf79649590e3fa0ea8d1938ae50d7703e8f9c171f0d74520ac7fb docker.io/ollivier/clearwater-homestead:latest],SizeBytes:295048084,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:adfa3978f2c94734010c014a2be7db9bc328419e0a205904543a86cd0719bd1a docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287324942,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:3e838bae03946022eae06e3d343167d4b28507909e9c17e1bf574a23b423f83d docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285384791,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:146688265,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:133590102,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:132867355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:113140489,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:77e928c23a5942aa681646be96dfb5897efe17b1e8676e8e94003ad08891b881 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39388175,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest],SizeBytes:767890,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug 21 12:16:09.509: INFO: 
Logging kubelet events for node kali-worker2
Aug 21 12:16:09.514: INFO: 
Logging pods the kubelet thinks are on node kali-worker2
Aug 21 12:16:09.528: INFO: kindnet-qzfqb started at 2020-08-15 09:40:30 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.528: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 12:16:09.528: INFO: kube-proxy-c52ll started at 2020-08-15 09:40:30 +0000 UTC (0+1 container statuses recorded)
Aug 21 12:16:09.528: INFO: 	Container kube-proxy ready: true, restart count 0
W0821 12:16:09.538215      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
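The warning above means the metrics grabber found no node registered with the master role, so Scheduler, ControllerManager and ClusterAutoscaler metrics are skipped for this failure dump. A hedged way to check which nodes carry that role label on this cluster (the label name is an assumption for a kubeadm/kind-style cluster):

    # Lists any node labelled with the master role; empty output matches the warning above.
    kubectl get nodes --show-labels | grep -i 'node-role.kubernetes.io/master'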
Aug 21 12:16:09.642: INFO: 
Latency metrics for node kali-worker2
Aug 21 12:16:09.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5676" for this suite.

• Failure [20.776 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance] [It]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

    Aug 21 12:15:56.831: Expected
        : 2
    to equal
        : 1

    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1329
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":59,"skipped":895,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:16:09.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6568
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-6568
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-6568
Aug 21 12:16:09.900: INFO: Found 0 stateful pods, waiting for 1
Aug 21 12:16:19.910: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
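Besides polling pod phase as the framework does above, the same wait can be expressed as a one-liner; a hedged sketch using the namespace from this run:

    # Block until every replica of the StatefulSet reports Ready.
    kubectl --namespace=statefulset-6568 rollout status statefulset/ss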
STEP: Confirming that stateful set scale-up will not halt with an unhealthy stateful pod
Aug 21 12:16:19.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6568 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 12:16:21.398: INFO: stderr: "I0821 12:16:21.251871     432 log.go:172] (0x4000a36000) (0x4000aaa000) Create stream\nI0821 12:16:21.256943     432 log.go:172] (0x4000a36000) (0x4000aaa000) Stream added, broadcasting: 1\nI0821 12:16:21.271614     432 log.go:172] (0x4000a36000) Reply frame received for 1\nI0821 12:16:21.272792     432 log.go:172] (0x4000a36000) (0x4000815400) Create stream\nI0821 12:16:21.272932     432 log.go:172] (0x4000a36000) (0x4000815400) Stream added, broadcasting: 3\nI0821 12:16:21.274969     432 log.go:172] (0x4000a36000) Reply frame received for 3\nI0821 12:16:21.275481     432 log.go:172] (0x4000a36000) (0x40008154a0) Create stream\nI0821 12:16:21.275586     432 log.go:172] (0x4000a36000) (0x40008154a0) Stream added, broadcasting: 5\nI0821 12:16:21.277737     432 log.go:172] (0x4000a36000) Reply frame received for 5\nI0821 12:16:21.343092     432 log.go:172] (0x4000a36000) Data frame received for 5\nI0821 12:16:21.343371     432 log.go:172] (0x40008154a0) (5) Data frame handling\nI0821 12:16:21.344026     432 log.go:172] (0x40008154a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 12:16:21.377887     432 log.go:172] (0x4000a36000) Data frame received for 5\nI0821 12:16:21.378099     432 log.go:172] (0x4000a36000) Data frame received for 3\nI0821 12:16:21.378313     432 log.go:172] (0x4000815400) (3) Data frame handling\nI0821 12:16:21.378470     432 log.go:172] (0x4000815400) (3) Data frame sent\nI0821 12:16:21.378568     432 log.go:172] (0x4000a36000) Data frame received for 3\nI0821 12:16:21.378651     432 log.go:172] (0x4000815400) (3) Data frame handling\nI0821 12:16:21.378907     432 log.go:172] (0x40008154a0) (5) Data frame handling\nI0821 12:16:21.379691     432 log.go:172] (0x4000a36000) Data frame received for 1\nI0821 12:16:21.379840     432 log.go:172] (0x4000aaa000) (1) Data frame handling\nI0821 12:16:21.379978     432 log.go:172] (0x4000aaa000) (1) Data frame sent\nI0821 12:16:21.381687     432 log.go:172] (0x4000a36000) (0x4000aaa000) Stream removed, broadcasting: 1\nI0821 12:16:21.384344     432 log.go:172] (0x4000a36000) Go away received\nI0821 12:16:21.386866     432 log.go:172] (0x4000a36000) (0x4000aaa000) Stream removed, broadcasting: 1\nI0821 12:16:21.387428     432 log.go:172] (0x4000a36000) (0x4000815400) Stream removed, broadcasting: 3\nI0821 12:16:21.387719     432 log.go:172] (0x4000a36000) (0x40008154a0) Stream removed, broadcasting: 5\n"
Aug 21 12:16:21.399: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 12:16:21.400: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 12:16:21.408: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 21 12:16:31.417: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
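The exec above moves index.html out of the httpd document root, which makes the webserver container's readiness probe start failing (the pod stays Running but reports Ready=false, as seen in the next lines). A hedged way to observe the same transition directly:

    # Readiness of the first container in ss-0; prints "false" once the probe fails.
    kubectl --namespace=statefulset-6568 get pod ss-0 -o jsonpath='{.status.containerStatuses[0].ready}'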
Aug 21 12:16:31.418: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 12:16:31.461: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
Aug 21 12:16:31.462: INFO: ss-0  kali-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  }]
Aug 21 12:16:31.463: INFO: 
Aug 21 12:16:31.463: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 21 12:16:32.471: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975874534s
Aug 21 12:16:33.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968082192s
Aug 21 12:16:34.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.856988359s
Aug 21 12:16:35.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.846957441s
Aug 21 12:16:36.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.839313787s
Aug 21 12:16:37.626: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.823073855s
Aug 21 12:16:38.637: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.812754547s
Aug 21 12:16:39.644: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.801279109s
Aug 21 12:16:40.655: INFO: Verifying statefulset ss doesn't scale past 3 for another 794.348577ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6568
Aug 21 12:16:41.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6568 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:16:43.132: INFO: stderr: "I0821 12:16:43.013298     456 log.go:172] (0x4000a4a0b0) (0x4000a22140) Create stream\nI0821 12:16:43.015709     456 log.go:172] (0x4000a4a0b0) (0x4000a22140) Stream added, broadcasting: 1\nI0821 12:16:43.028147     456 log.go:172] (0x4000a4a0b0) Reply frame received for 1\nI0821 12:16:43.029006     456 log.go:172] (0x4000a4a0b0) (0x4000a221e0) Create stream\nI0821 12:16:43.029088     456 log.go:172] (0x4000a4a0b0) (0x4000a221e0) Stream added, broadcasting: 3\nI0821 12:16:43.030883     456 log.go:172] (0x4000a4a0b0) Reply frame received for 3\nI0821 12:16:43.031251     456 log.go:172] (0x4000a4a0b0) (0x400080b4a0) Create stream\nI0821 12:16:43.031332     456 log.go:172] (0x4000a4a0b0) (0x400080b4a0) Stream added, broadcasting: 5\nI0821 12:16:43.032580     456 log.go:172] (0x4000a4a0b0) Reply frame received for 5\nI0821 12:16:43.110436     456 log.go:172] (0x4000a4a0b0) Data frame received for 3\nI0821 12:16:43.110674     456 log.go:172] (0x4000a4a0b0) Data frame received for 5\nI0821 12:16:43.110863     456 log.go:172] (0x4000a4a0b0) Data frame received for 1\nI0821 12:16:43.111045     456 log.go:172] (0x4000a22140) (1) Data frame handling\nI0821 12:16:43.111182     456 log.go:172] (0x4000a221e0) (3) Data frame handling\nI0821 12:16:43.111588     456 log.go:172] (0x400080b4a0) (5) Data frame handling\nI0821 12:16:43.112368     456 log.go:172] (0x4000a22140) (1) Data frame sent\nI0821 12:16:43.112862     456 log.go:172] (0x400080b4a0) (5) Data frame sent\nI0821 12:16:43.112949     456 log.go:172] (0x4000a4a0b0) Data frame received for 5\nI0821 12:16:43.113010     456 log.go:172] (0x400080b4a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 12:16:43.113475     456 log.go:172] (0x4000a221e0) (3) Data frame sent\nI0821 12:16:43.113656     456 log.go:172] (0x4000a4a0b0) Data frame received for 3\nI0821 12:16:43.113788     456 log.go:172] (0x4000a221e0) (3) Data frame handling\nI0821 12:16:43.115698     456 log.go:172] (0x4000a4a0b0) (0x4000a22140) Stream removed, broadcasting: 1\nI0821 12:16:43.118928     456 log.go:172] (0x4000a4a0b0) Go away received\nI0821 12:16:43.122099     456 log.go:172] (0x4000a4a0b0) (0x4000a22140) Stream removed, broadcasting: 1\nI0821 12:16:43.122369     456 log.go:172] (0x4000a4a0b0) (0x4000a221e0) Stream removed, broadcasting: 3\nI0821 12:16:43.122552     456 log.go:172] (0x4000a4a0b0) (0x400080b4a0) Stream removed, broadcasting: 5\n"
Aug 21 12:16:43.133: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 12:16:43.133: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 12:16:43.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6568 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:16:44.598: INFO: stderr: "I0821 12:16:44.479779     480 log.go:172] (0x4000a52f20) (0x4000710320) Create stream\nI0821 12:16:44.485077     480 log.go:172] (0x4000a52f20) (0x4000710320) Stream added, broadcasting: 1\nI0821 12:16:44.495056     480 log.go:172] (0x4000a52f20) Reply frame received for 1\nI0821 12:16:44.495619     480 log.go:172] (0x4000a52f20) (0x4000860000) Create stream\nI0821 12:16:44.495679     480 log.go:172] (0x4000a52f20) (0x4000860000) Stream added, broadcasting: 3\nI0821 12:16:44.496982     480 log.go:172] (0x4000a52f20) Reply frame received for 3\nI0821 12:16:44.497228     480 log.go:172] (0x4000a52f20) (0x4000514c80) Create stream\nI0821 12:16:44.497294     480 log.go:172] (0x4000a52f20) (0x4000514c80) Stream added, broadcasting: 5\nI0821 12:16:44.498988     480 log.go:172] (0x4000a52f20) Reply frame received for 5\nI0821 12:16:44.566377     480 log.go:172] (0x4000a52f20) Data frame received for 5\nI0821 12:16:44.566828     480 log.go:172] (0x4000a52f20) Data frame received for 3\nI0821 12:16:44.566961     480 log.go:172] (0x4000860000) (3) Data frame handling\nI0821 12:16:44.567269     480 log.go:172] (0x4000a52f20) Data frame received for 1\nI0821 12:16:44.567470     480 log.go:172] (0x4000710320) (1) Data frame handling\nI0821 12:16:44.567578     480 log.go:172] (0x4000514c80) (5) Data frame handling\nI0821 12:16:44.568238     480 log.go:172] (0x4000514c80) (5) Data frame sent\nI0821 12:16:44.568451     480 log.go:172] (0x4000710320) (1) Data frame sent\nI0821 12:16:44.568663     480 log.go:172] (0x4000860000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0821 12:16:44.569751     480 log.go:172] (0x4000a52f20) Data frame received for 3\nI0821 12:16:44.577166     480 log.go:172] (0x4000a52f20) Data frame received for 5\nI0821 12:16:44.577940     480 log.go:172] (0x4000a52f20) (0x4000710320) Stream removed, broadcasting: 1\nI0821 12:16:44.580504     480 log.go:172] (0x4000514c80) (5) Data frame handling\nI0821 12:16:44.580876     480 log.go:172] (0x4000860000) (3) Data frame handling\nI0821 12:16:44.582220     480 log.go:172] (0x4000a52f20) Go away received\nI0821 12:16:44.587901     480 log.go:172] (0x4000a52f20) (0x4000710320) Stream removed, broadcasting: 1\nI0821 12:16:44.588223     480 log.go:172] (0x4000a52f20) (0x4000860000) Stream removed, broadcasting: 3\nI0821 12:16:44.588399     480 log.go:172] (0x4000a52f20) (0x4000514c80) Stream removed, broadcasting: 5\n"
Aug 21 12:16:44.599: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 12:16:44.599: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 12:16:44.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6568 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:16:46.070: INFO: stderr: "I0821 12:16:45.940995     506 log.go:172] (0x4000ace000) (0x4000a82000) Create stream\nI0821 12:16:45.946332     506 log.go:172] (0x4000ace000) (0x4000a82000) Stream added, broadcasting: 1\nI0821 12:16:45.961713     506 log.go:172] (0x4000ace000) Reply frame received for 1\nI0821 12:16:45.963274     506 log.go:172] (0x4000ace000) (0x4000ac4000) Create stream\nI0821 12:16:45.963402     506 log.go:172] (0x4000ace000) (0x4000ac4000) Stream added, broadcasting: 3\nI0821 12:16:45.965180     506 log.go:172] (0x4000ace000) Reply frame received for 3\nI0821 12:16:45.965398     506 log.go:172] (0x4000ace000) (0x4000a820a0) Create stream\nI0821 12:16:45.965453     506 log.go:172] (0x4000ace000) (0x4000a820a0) Stream added, broadcasting: 5\nI0821 12:16:45.966928     506 log.go:172] (0x4000ace000) Reply frame received for 5\nI0821 12:16:46.050878     506 log.go:172] (0x4000ace000) Data frame received for 3\nI0821 12:16:46.051158     506 log.go:172] (0x4000ace000) Data frame received for 5\nI0821 12:16:46.051364     506 log.go:172] (0x4000a820a0) (5) Data frame handling\nI0821 12:16:46.051576     506 log.go:172] (0x4000ace000) Data frame received for 1\nI0821 12:16:46.051702     506 log.go:172] (0x4000a82000) (1) Data frame handling\nI0821 12:16:46.052147     506 log.go:172] (0x4000ac4000) (3) Data frame handling\nI0821 12:16:46.052297     506 log.go:172] (0x4000a82000) (1) Data frame sent\nI0821 12:16:46.052506     506 log.go:172] (0x4000a820a0) (5) Data frame sent\nI0821 12:16:46.052601     506 log.go:172] (0x4000ace000) Data frame received for 5\nI0821 12:16:46.052664     506 log.go:172] (0x4000a820a0) (5) Data frame handling\nI0821 12:16:46.053475     506 log.go:172] (0x4000ac4000) (3) Data frame sent\nI0821 12:16:46.053560     506 log.go:172] (0x4000ace000) Data frame received for 3\nI0821 12:16:46.053623     506 log.go:172] (0x4000ac4000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0821 12:16:46.056095     506 log.go:172] (0x4000ace000) (0x4000a82000) Stream removed, broadcasting: 1\nI0821 12:16:46.057675     506 log.go:172] (0x4000ace000) Go away received\nI0821 12:16:46.061636     506 log.go:172] (0x4000ace000) (0x4000a82000) Stream removed, broadcasting: 1\nI0821 12:16:46.061949     506 log.go:172] (0x4000ace000) (0x4000ac4000) Stream removed, broadcasting: 3\nI0821 12:16:46.062174     506 log.go:172] (0x4000ace000) (0x4000a820a0) Stream removed, broadcasting: 5\n"
Aug 21 12:16:46.071: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 12:16:46.071: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 12:16:46.078: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 12:16:46.078: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 12:16:46.078: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
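The scale-up proceeds while ss-0 is unready because burst scaling corresponds to parallel pod management; with the default OrderedReady policy the controller would wait for ss-0 to become Ready before creating ss-1 and ss-2. A hedged sketch of how to confirm the policy and reproduce the scale-up by hand (assumption: the test manifest sets the field explicitly):

    # Should print "Parallel" for a burst-scaling StatefulSet.
    kubectl --namespace=statefulset-6568 get statefulset ss -o jsonpath='{.spec.podManagementPolicy}'
    kubectl --namespace=statefulset-6568 scale statefulset ss --replicas=3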
STEP: Scale down will not halt with an unhealthy stateful pod
Aug 21 12:16:46.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6568 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 12:16:47.529: INFO: stderr: "I0821 12:16:47.408381     529 log.go:172] (0x4000a82000) (0x4000aea000) Create stream\nI0821 12:16:47.410814     529 log.go:172] (0x4000a82000) (0x4000aea000) Stream added, broadcasting: 1\nI0821 12:16:47.423746     529 log.go:172] (0x4000a82000) Reply frame received for 1\nI0821 12:16:47.424633     529 log.go:172] (0x4000a82000) (0x40007fd180) Create stream\nI0821 12:16:47.424720     529 log.go:172] (0x4000a82000) (0x40007fd180) Stream added, broadcasting: 3\nI0821 12:16:47.426817     529 log.go:172] (0x4000a82000) Reply frame received for 3\nI0821 12:16:47.427414     529 log.go:172] (0x4000a82000) (0x4000718000) Create stream\nI0821 12:16:47.427543     529 log.go:172] (0x4000a82000) (0x4000718000) Stream added, broadcasting: 5\nI0821 12:16:47.429467     529 log.go:172] (0x4000a82000) Reply frame received for 5\nI0821 12:16:47.509277     529 log.go:172] (0x4000a82000) Data frame received for 5\nI0821 12:16:47.509671     529 log.go:172] (0x4000a82000) Data frame received for 1\nI0821 12:16:47.509846     529 log.go:172] (0x4000aea000) (1) Data frame handling\nI0821 12:16:47.510127     529 log.go:172] (0x4000a82000) Data frame received for 3\nI0821 12:16:47.510332     529 log.go:172] (0x40007fd180) (3) Data frame handling\nI0821 12:16:47.510539     529 log.go:172] (0x4000718000) (5) Data frame handling\nI0821 12:16:47.510997     529 log.go:172] (0x4000718000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 12:16:47.511881     529 log.go:172] (0x40007fd180) (3) Data frame sent\nI0821 12:16:47.512025     529 log.go:172] (0x4000a82000) Data frame received for 5\nI0821 12:16:47.512114     529 log.go:172] (0x4000a82000) Data frame received for 3\nI0821 12:16:47.512237     529 log.go:172] (0x40007fd180) (3) Data frame handling\nI0821 12:16:47.512502     529 log.go:172] (0x4000aea000) (1) Data frame sent\nI0821 12:16:47.512835     529 log.go:172] (0x4000718000) (5) Data frame handling\nI0821 12:16:47.514019     529 log.go:172] (0x4000a82000) (0x4000aea000) Stream removed, broadcasting: 1\nI0821 12:16:47.516910     529 log.go:172] (0x4000a82000) Go away received\nI0821 12:16:47.521957     529 log.go:172] (0x4000a82000) (0x4000aea000) Stream removed, broadcasting: 1\nI0821 12:16:47.522358     529 log.go:172] (0x4000a82000) (0x40007fd180) Stream removed, broadcasting: 3\nI0821 12:16:47.522630     529 log.go:172] (0x4000a82000) (0x4000718000) Stream removed, broadcasting: 5\n"
Aug 21 12:16:47.530: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 12:16:47.530: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 12:16:47.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6568 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 12:16:49.006: INFO: stderr: "I0821 12:16:48.872046     552 log.go:172] (0x4000ac8000) (0x400070c000) Create stream\nI0821 12:16:48.875226     552 log.go:172] (0x4000ac8000) (0x400070c000) Stream added, broadcasting: 1\nI0821 12:16:48.886128     552 log.go:172] (0x4000ac8000) Reply frame received for 1\nI0821 12:16:48.887149     552 log.go:172] (0x4000ac8000) (0x400070c0a0) Create stream\nI0821 12:16:48.887265     552 log.go:172] (0x4000ac8000) (0x400070c0a0) Stream added, broadcasting: 3\nI0821 12:16:48.888862     552 log.go:172] (0x4000ac8000) Reply frame received for 3\nI0821 12:16:48.889152     552 log.go:172] (0x4000ac8000) (0x4000776000) Create stream\nI0821 12:16:48.889220     552 log.go:172] (0x4000ac8000) (0x4000776000) Stream added, broadcasting: 5\nI0821 12:16:48.890521     552 log.go:172] (0x4000ac8000) Reply frame received for 5\nI0821 12:16:48.971092     552 log.go:172] (0x4000ac8000) Data frame received for 5\nI0821 12:16:48.971509     552 log.go:172] (0x4000776000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 12:16:48.972874     552 log.go:172] (0x4000776000) (5) Data frame sent\nI0821 12:16:48.987139     552 log.go:172] (0x4000ac8000) Data frame received for 3\nI0821 12:16:48.987232     552 log.go:172] (0x400070c0a0) (3) Data frame handling\nI0821 12:16:48.987357     552 log.go:172] (0x400070c0a0) (3) Data frame sent\nI0821 12:16:48.987463     552 log.go:172] (0x4000ac8000) Data frame received for 3\nI0821 12:16:48.987567     552 log.go:172] (0x4000ac8000) Data frame received for 5\nI0821 12:16:48.987712     552 log.go:172] (0x4000776000) (5) Data frame handling\nI0821 12:16:48.987967     552 log.go:172] (0x400070c0a0) (3) Data frame handling\nI0821 12:16:48.989811     552 log.go:172] (0x4000ac8000) Data frame received for 1\nI0821 12:16:48.989881     552 log.go:172] (0x400070c000) (1) Data frame handling\nI0821 12:16:48.989964     552 log.go:172] (0x400070c000) (1) Data frame sent\nI0821 12:16:48.991509     552 log.go:172] (0x4000ac8000) (0x400070c000) Stream removed, broadcasting: 1\nI0821 12:16:48.994236     552 log.go:172] (0x4000ac8000) Go away received\nI0821 12:16:48.998125     552 log.go:172] (0x4000ac8000) (0x400070c000) Stream removed, broadcasting: 1\nI0821 12:16:48.998660     552 log.go:172] (0x4000ac8000) (0x400070c0a0) Stream removed, broadcasting: 3\nI0821 12:16:48.999063     552 log.go:172] (0x4000ac8000) (0x4000776000) Stream removed, broadcasting: 5\n"
Aug 21 12:16:49.007: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 12:16:49.007: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 12:16:49.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6568 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 12:16:50.477: INFO: stderr: "I0821 12:16:50.350867     575 log.go:172] (0x4000b08b00) (0x4000a34000) Create stream\nI0821 12:16:50.354700     575 log.go:172] (0x4000b08b00) (0x4000a34000) Stream added, broadcasting: 1\nI0821 12:16:50.365519     575 log.go:172] (0x4000b08b00) Reply frame received for 1\nI0821 12:16:50.366048     575 log.go:172] (0x4000b08b00) (0x4000a340a0) Create stream\nI0821 12:16:50.366101     575 log.go:172] (0x4000b08b00) (0x4000a340a0) Stream added, broadcasting: 3\nI0821 12:16:50.367313     575 log.go:172] (0x4000b08b00) Reply frame received for 3\nI0821 12:16:50.367539     575 log.go:172] (0x4000b08b00) (0x4000a34140) Create stream\nI0821 12:16:50.367593     575 log.go:172] (0x4000b08b00) (0x4000a34140) Stream added, broadcasting: 5\nI0821 12:16:50.369327     575 log.go:172] (0x4000b08b00) Reply frame received for 5\nI0821 12:16:50.424439     575 log.go:172] (0x4000b08b00) Data frame received for 5\nI0821 12:16:50.424935     575 log.go:172] (0x4000a34140) (5) Data frame handling\nI0821 12:16:50.425799     575 log.go:172] (0x4000a34140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 12:16:50.454597     575 log.go:172] (0x4000b08b00) Data frame received for 3\nI0821 12:16:50.454878     575 log.go:172] (0x4000a340a0) (3) Data frame handling\nI0821 12:16:50.455038     575 log.go:172] (0x4000b08b00) Data frame received for 5\nI0821 12:16:50.455213     575 log.go:172] (0x4000a34140) (5) Data frame handling\nI0821 12:16:50.455421     575 log.go:172] (0x4000a340a0) (3) Data frame sent\nI0821 12:16:50.455544     575 log.go:172] (0x4000b08b00) Data frame received for 3\nI0821 12:16:50.455657     575 log.go:172] (0x4000a340a0) (3) Data frame handling\nI0821 12:16:50.456651     575 log.go:172] (0x4000b08b00) Data frame received for 1\nI0821 12:16:50.456891     575 log.go:172] (0x4000a34000) (1) Data frame handling\nI0821 12:16:50.457054     575 log.go:172] (0x4000a34000) (1) Data frame sent\nI0821 12:16:50.458946     575 log.go:172] (0x4000b08b00) (0x4000a34000) Stream removed, broadcasting: 1\nI0821 12:16:50.462884     575 log.go:172] (0x4000b08b00) Go away received\nI0821 12:16:50.465547     575 log.go:172] (0x4000b08b00) (0x4000a34000) Stream removed, broadcasting: 1\nI0821 12:16:50.465889     575 log.go:172] (0x4000b08b00) (0x4000a340a0) Stream removed, broadcasting: 3\nI0821 12:16:50.466129     575 log.go:172] (0x4000b08b00) (0x4000a34140) Stream removed, broadcasting: 5\n"
Aug 21 12:16:50.477: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 12:16:50.477: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 12:16:50.478: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 12:16:50.483: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 21 12:17:00.852: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 12:17:00.852: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 12:17:00.852: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 12:17:00.875: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 21 12:17:00.875: INFO: ss-0  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  }]
Aug 21 12:17:00.876: INFO: ss-1  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:00.877: INFO: ss-2  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:00.877: INFO: 
Aug 21 12:17:00.877: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 12:17:01.887: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 21 12:17:01.887: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  }]
Aug 21 12:17:01.888: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:01.888: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:01.888: INFO: 
Aug 21 12:17:01.888: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 12:17:02.958: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 21 12:17:02.958: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  }]
Aug 21 12:17:02.959: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:02.959: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:02.959: INFO: 
Aug 21 12:17:02.959: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 12:17:04.025: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 21 12:17:04.025: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  }]
Aug 21 12:17:04.026: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:04.026: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:04.026: INFO: 
Aug 21 12:17:04.026: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 12:17:05.037: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 21 12:17:05.037: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  }]
Aug 21 12:17:05.038: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:05.038: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:05.038: INFO: 
Aug 21 12:17:05.039: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 12:17:06.051: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 21 12:17:06.052: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  }]
Aug 21 12:17:06.052: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:06.053: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:06.053: INFO: 
Aug 21 12:17:06.053: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 12:17:07.062: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 21 12:17:07.062: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  }]
Aug 21 12:17:07.062: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:07.063: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:07.063: INFO: 
Aug 21 12:17:07.063: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 12:17:08.073: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 21 12:17:08.074: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  }]
Aug 21 12:17:08.074: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:08.074: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:08.074: INFO: 
Aug 21 12:17:08.074: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 12:17:09.087: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 21 12:17:09.087: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:09 +0000 UTC  }]
Aug 21 12:17:09.087: INFO: ss-1  kali-worker2  Pending  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:09.088: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 12:16:31 +0000 UTC  }]
Aug 21 12:17:09.088: INFO: 
Aug 21 12:17:09.088: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 12:17:10.094: INFO: Verifying statefulset ss doesn't scale past 0 for another 774.609573ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6568
Aug 21 12:17:11.101: INFO: Scaling statefulset ss to 0
Aug 21 12:17:11.120: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 21 12:17:11.124: INFO: Deleting all statefulset in ns statefulset-6568
Aug 21 12:17:11.129: INFO: Scaling statefulset ss to 0
Aug 21 12:17:11.142: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 12:17:11.146: INFO: Deleting statefulset ss
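The teardown above scales the StatefulSet to zero and then deletes it; a roughly equivalent manual cleanup, as a hedged sketch:

    kubectl --namespace=statefulset-6568 scale statefulset ss --replicas=0
    kubectl --namespace=statefulset-6568 get pods -w        # watch ss-0..ss-2 terminate (default 30s grace period, as shown in the GRACE column above)
    kubectl --namespace=statefulset-6568 delete statefulset ss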
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:17:11.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6568" for this suite.

• [SLOW TEST:61.538 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":60,"skipped":908,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:17:11.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0821 12:17:11.971972      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 12:17:11.973: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:17:11.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8128" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":61,"skipped":925,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:17:11.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Aug 21 12:17:12.624: INFO: created pod pod-service-account-defaultsa
Aug 21 12:17:12.624: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 21 12:17:12.633: INFO: created pod pod-service-account-mountsa
Aug 21 12:17:12.633: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 21 12:17:12.665: INFO: created pod pod-service-account-nomountsa
Aug 21 12:17:12.665: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 21 12:17:12.696: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 21 12:17:12.696: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 21 12:17:12.748: INFO: created pod pod-service-account-mountsa-mountspec
Aug 21 12:17:12.749: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 21 12:17:12.774: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 21 12:17:12.774: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 21 12:17:12.892: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 21 12:17:12.892: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 21 12:17:13.166: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 21 12:17:13.166: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 21 12:17:13.425: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 21 12:17:13.426: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:17:13.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3800" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":62,"skipped":931,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:17:14.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:17:16.219: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 21 12:17:20.007: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:17:21.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2529" for this suite.

• [SLOW TEST:8.716 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":63,"skipped":939,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:17:23.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:17:27.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5414" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":64,"skipped":939,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:17:28.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:17:30.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1931" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":65,"skipped":978,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:17:31.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-4194
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 12:17:31.255: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 21 12:17:31.700: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 12:17:33.712: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 12:17:35.706: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 12:17:37.708: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:17:39.708: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:17:41.709: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:17:43.708: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:17:45.708: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:17:47.706: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:17:49.707: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 21 12:17:49.761: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 21 12:17:51.808: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 21 12:17:55.999: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.48 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4194 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:17:55.999: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:17:56.070998      10 log.go:172] (0x4001f589a0) (0x4002f80e60) Create stream
I0821 12:17:56.071136      10 log.go:172] (0x4001f589a0) (0x4002f80e60) Stream added, broadcasting: 1
I0821 12:17:56.074761      10 log.go:172] (0x4001f589a0) Reply frame received for 1
I0821 12:17:56.074904      10 log.go:172] (0x4001f589a0) (0x4002a3a780) Create stream
I0821 12:17:56.074973      10 log.go:172] (0x4001f589a0) (0x4002a3a780) Stream added, broadcasting: 3
I0821 12:17:56.076046      10 log.go:172] (0x4001f589a0) Reply frame received for 3
I0821 12:17:56.076164      10 log.go:172] (0x4001f589a0) (0x4002f80fa0) Create stream
I0821 12:17:56.076228      10 log.go:172] (0x4001f589a0) (0x4002f80fa0) Stream added, broadcasting: 5
I0821 12:17:56.077469      10 log.go:172] (0x4001f589a0) Reply frame received for 5
I0821 12:17:57.147050      10 log.go:172] (0x4001f589a0) Data frame received for 5
I0821 12:17:57.147398      10 log.go:172] (0x4002f80fa0) (5) Data frame handling
I0821 12:17:57.147566      10 log.go:172] (0x4001f589a0) Data frame received for 3
I0821 12:17:57.147739      10 log.go:172] (0x4002a3a780) (3) Data frame handling
I0821 12:17:57.147928      10 log.go:172] (0x4002a3a780) (3) Data frame sent
I0821 12:17:57.148082      10 log.go:172] (0x4001f589a0) Data frame received for 3
I0821 12:17:57.148288      10 log.go:172] (0x4002a3a780) (3) Data frame handling
I0821 12:17:57.149590      10 log.go:172] (0x4001f589a0) Data frame received for 1
I0821 12:17:57.149749      10 log.go:172] (0x4002f80e60) (1) Data frame handling
I0821 12:17:57.149880      10 log.go:172] (0x4002f80e60) (1) Data frame sent
I0821 12:17:57.150001      10 log.go:172] (0x4001f589a0) (0x4002f80e60) Stream removed, broadcasting: 1
I0821 12:17:57.150152      10 log.go:172] (0x4001f589a0) Go away received
I0821 12:17:57.150733      10 log.go:172] (0x4001f589a0) (0x4002f80e60) Stream removed, broadcasting: 1
I0821 12:17:57.150944      10 log.go:172] (0x4001f589a0) (0x4002a3a780) Stream removed, broadcasting: 3
I0821 12:17:57.151062      10 log.go:172] (0x4001f589a0) (0x4002f80fa0) Stream removed, broadcasting: 5
Aug 21 12:17:57.151: INFO: Found all expected endpoints: [netserver-0]
Aug 21 12:17:57.158: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.140 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4194 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:17:57.159: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:17:57.214903      10 log.go:172] (0x40024d1ad0) (0x4002a56320) Create stream
I0821 12:17:57.215093      10 log.go:172] (0x40024d1ad0) (0x4002a56320) Stream added, broadcasting: 1
I0821 12:17:57.223504      10 log.go:172] (0x40024d1ad0) Reply frame received for 1
I0821 12:17:57.223768      10 log.go:172] (0x40024d1ad0) (0x400247fb80) Create stream
I0821 12:17:57.223921      10 log.go:172] (0x40024d1ad0) (0x400247fb80) Stream added, broadcasting: 3
I0821 12:17:57.225905      10 log.go:172] (0x40024d1ad0) Reply frame received for 3
I0821 12:17:57.226066      10 log.go:172] (0x40024d1ad0) (0x4002a3a820) Create stream
I0821 12:17:57.226147      10 log.go:172] (0x40024d1ad0) (0x4002a3a820) Stream added, broadcasting: 5
I0821 12:17:57.227605      10 log.go:172] (0x40024d1ad0) Reply frame received for 5
I0821 12:17:58.323192      10 log.go:172] (0x40024d1ad0) Data frame received for 5
I0821 12:17:58.323367      10 log.go:172] (0x4002a3a820) (5) Data frame handling
I0821 12:17:58.323460      10 log.go:172] (0x40024d1ad0) Data frame received for 3
I0821 12:17:58.323565      10 log.go:172] (0x400247fb80) (3) Data frame handling
I0821 12:17:58.323667      10 log.go:172] (0x400247fb80) (3) Data frame sent
I0821 12:17:58.323791      10 log.go:172] (0x40024d1ad0) Data frame received for 3
I0821 12:17:58.323886      10 log.go:172] (0x400247fb80) (3) Data frame handling
I0821 12:17:58.324846      10 log.go:172] (0x40024d1ad0) Data frame received for 1
I0821 12:17:58.324975      10 log.go:172] (0x4002a56320) (1) Data frame handling
I0821 12:17:58.325073      10 log.go:172] (0x4002a56320) (1) Data frame sent
I0821 12:17:58.325196      10 log.go:172] (0x40024d1ad0) (0x4002a56320) Stream removed, broadcasting: 1
I0821 12:17:58.325311      10 log.go:172] (0x40024d1ad0) Go away received
I0821 12:17:58.325756      10 log.go:172] (0x40024d1ad0) (0x4002a56320) Stream removed, broadcasting: 1
I0821 12:17:58.325904      10 log.go:172] (0x40024d1ad0) (0x400247fb80) Stream removed, broadcasting: 3
I0821 12:17:58.326049      10 log.go:172] (0x40024d1ad0) (0x4002a3a820) Stream removed, broadcasting: 5
Aug 21 12:17:58.326: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:17:58.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4194" for this suite.

• [SLOW TEST:27.155 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":986,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:17:58.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:18:03.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3696" for this suite.

• [SLOW TEST:5.178 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":67,"skipped":1009,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:18:03.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Aug 21 12:18:03.683: INFO: Waiting up to 5m0s for pod "client-containers-507eaaca-16fb-4861-97f1-8eefbb9ece9d" in namespace "containers-6392" to be "Succeeded or Failed"
Aug 21 12:18:03.696: INFO: Pod "client-containers-507eaaca-16fb-4861-97f1-8eefbb9ece9d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.125954ms
Aug 21 12:18:05.703: INFO: Pod "client-containers-507eaaca-16fb-4861-97f1-8eefbb9ece9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020102466s
Aug 21 12:18:07.843: INFO: Pod "client-containers-507eaaca-16fb-4861-97f1-8eefbb9ece9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160527563s
Aug 21 12:18:09.850: INFO: Pod "client-containers-507eaaca-16fb-4861-97f1-8eefbb9ece9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167407201s
STEP: Saw pod success
Aug 21 12:18:09.851: INFO: Pod "client-containers-507eaaca-16fb-4861-97f1-8eefbb9ece9d" satisfied condition "Succeeded or Failed"
Aug 21 12:18:09.855: INFO: Trying to get logs from node kali-worker2 pod client-containers-507eaaca-16fb-4861-97f1-8eefbb9ece9d container test-container: 
STEP: delete the pod
Aug 21 12:18:09.930: INFO: Waiting for pod client-containers-507eaaca-16fb-4861-97f1-8eefbb9ece9d to disappear
Aug 21 12:18:09.940: INFO: Pod client-containers-507eaaca-16fb-4861-97f1-8eefbb9ece9d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:18:09.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6392" for this suite.

• [SLOW TEST:6.426 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1051,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:18:09.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-vtmdk in namespace proxy-3095
I0821 12:18:10.348912      10 runners.go:190] Created replication controller with name: proxy-service-vtmdk, namespace: proxy-3095, replica count: 1
I0821 12:18:11.400419      10 runners.go:190] proxy-service-vtmdk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 12:18:12.401213      10 runners.go:190] proxy-service-vtmdk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 12:18:13.402096      10 runners.go:190] proxy-service-vtmdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0821 12:18:14.402773      10 runners.go:190] proxy-service-vtmdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0821 12:18:15.403497      10 runners.go:190] proxy-service-vtmdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0821 12:18:16.404189      10 runners.go:190] proxy-service-vtmdk Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 12:18:16.417: INFO: setup took 6.109990021s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 21 12:18:16.428: INFO: (0) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 9.602529ms)
Aug 21 12:18:16.428: INFO: (0) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 10.072571ms)
Aug 21 12:18:16.428: INFO: (0) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 9.616134ms)
Aug 21 12:18:16.428: INFO: (0) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 9.889208ms)
Aug 21 12:18:16.431: INFO: (0) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 13.23618ms)
Aug 21 12:18:16.432: INFO: (0) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 14.711362ms)
Aug 21 12:18:16.433: INFO: (0) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 14.833424ms)
Aug 21 12:18:16.433: INFO: (0) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 13.750952ms)
Aug 21 12:18:16.433: INFO: (0) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 13.958881ms)
Aug 21 12:18:16.433: INFO: (0) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 14.729646ms)
Aug 21 12:18:16.433: INFO: (0) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 15.868618ms)
Aug 21 12:18:16.434: INFO: (0) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 15.604724ms)
Aug 21 12:18:16.435: INFO: (0) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 16.281653ms)
Aug 21 12:18:16.435: INFO: (0) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: ... (200; 6.528915ms)
Aug 21 12:18:16.444: INFO: (1) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 6.617217ms)
Aug 21 12:18:16.444: INFO: (1) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 6.946067ms)
Aug 21 12:18:16.444: INFO: (1) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 7.289883ms)
Aug 21 12:18:16.444: INFO: (1) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 7.233197ms)
Aug 21 12:18:16.444: INFO: (1) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 7.631352ms)
Aug 21 12:18:16.444: INFO: (1) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 7.536453ms)
Aug 21 12:18:16.445: INFO: (1) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 7.825432ms)
Aug 21 12:18:16.445: INFO: (1) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: test<... (200; 7.811845ms)
Aug 21 12:18:16.446: INFO: (1) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 8.71009ms)
Aug 21 12:18:16.446: INFO: (1) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 9.116215ms)
Aug 21 12:18:16.455: INFO: (2) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 8.154966ms)
Aug 21 12:18:16.455: INFO: (2) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 8.206273ms)
Aug 21 12:18:16.455: INFO: (2) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 8.002244ms)
Aug 21 12:18:16.455: INFO: (2) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 8.029487ms)
Aug 21 12:18:16.455: INFO: (2) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 8.100463ms)
Aug 21 12:18:16.455: INFO: (2) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 8.34118ms)
Aug 21 12:18:16.455: INFO: (2) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 8.520943ms)
Aug 21 12:18:16.455: INFO: (2) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: ... (200; 12.29912ms)
Aug 21 12:18:16.469: INFO: (3) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 12.677983ms)
Aug 21 12:18:16.470: INFO: (3) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 12.886942ms)
Aug 21 12:18:16.470: INFO: (3) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 12.896021ms)
Aug 21 12:18:16.470: INFO: (3) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 13.012304ms)
Aug 21 12:18:16.470: INFO: (3) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 13.002483ms)
Aug 21 12:18:16.470: INFO: (3) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 13.515808ms)
Aug 21 12:18:16.470: INFO: (3) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: test (200; 13.979406ms)
Aug 21 12:18:16.471: INFO: (3) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 14.394146ms)
Aug 21 12:18:16.471: INFO: (3) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 14.333148ms)
Aug 21 12:18:16.471: INFO: (3) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 14.460133ms)
Aug 21 12:18:16.478: INFO: (4) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 6.358222ms)
Aug 21 12:18:16.479: INFO: (4) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 6.905299ms)
Aug 21 12:18:16.479: INFO: (4) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 7.050742ms)
Aug 21 12:18:16.479: INFO: (4) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 7.243201ms)
Aug 21 12:18:16.479: INFO: (4) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 7.439009ms)
Aug 21 12:18:16.479: INFO: (4) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: ... (200; 8.201182ms)
Aug 21 12:18:16.482: INFO: (4) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 10.109219ms)
Aug 21 12:18:16.482: INFO: (4) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 10.344481ms)
Aug 21 12:18:16.483: INFO: (4) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 10.488812ms)
Aug 21 12:18:16.484: INFO: (4) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 11.442803ms)
Aug 21 12:18:16.484: INFO: (4) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 11.705729ms)
Aug 21 12:18:16.492: INFO: (5) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 7.471962ms)
Aug 21 12:18:16.493: INFO: (5) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 8.535669ms)
Aug 21 12:18:16.495: INFO: (5) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 11.354637ms)
Aug 21 12:18:16.496: INFO: (5) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 11.555783ms)
Aug 21 12:18:16.496: INFO: (5) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 11.914348ms)
Aug 21 12:18:16.496: INFO: (5) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 12.402164ms)
Aug 21 12:18:16.496: INFO: (5) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 12.02642ms)
Aug 21 12:18:16.497: INFO: (5) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 12.346552ms)
Aug 21 12:18:16.497: INFO: (5) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 12.372607ms)
Aug 21 12:18:16.497: INFO: (5) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 12.816188ms)
Aug 21 12:18:16.497: INFO: (5) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 13.327162ms)
Aug 21 12:18:16.498: INFO: (5) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 13.458001ms)
Aug 21 12:18:16.498: INFO: (5) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 13.598628ms)
Aug 21 12:18:16.498: INFO: (5) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 13.432521ms)
Aug 21 12:18:16.498: INFO: (5) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: test (200; 8.772273ms)
Aug 21 12:18:16.508: INFO: (6) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 9.387311ms)
Aug 21 12:18:16.508: INFO: (6) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 8.995605ms)
Aug 21 12:18:16.509: INFO: (6) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 8.960367ms)
Aug 21 12:18:16.509: INFO: (6) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 9.558298ms)
Aug 21 12:18:16.509: INFO: (6) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 9.562342ms)
Aug 21 12:18:16.509: INFO: (6) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: ... (200; 9.677359ms)
Aug 21 12:18:16.509: INFO: (6) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 10.08217ms)
Aug 21 12:18:16.509: INFO: (6) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 10.326258ms)
Aug 21 12:18:16.513: INFO: (7) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 3.68871ms)
Aug 21 12:18:16.524: INFO: (7) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 14.27996ms)
Aug 21 12:18:16.526: INFO: (7) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 15.908119ms)
Aug 21 12:18:16.526: INFO: (7) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 16.081707ms)
Aug 21 12:18:16.526: INFO: (7) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 16.824073ms)
Aug 21 12:18:16.527: INFO: (7) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 16.773831ms)
Aug 21 12:18:16.527: INFO: (7) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 16.81885ms)
Aug 21 12:18:16.527: INFO: (7) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 17.628105ms)
Aug 21 12:18:16.528: INFO: (7) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: test (200; 17.744574ms)
Aug 21 12:18:16.528: INFO: (7) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 18.058809ms)
Aug 21 12:18:16.529: INFO: (7) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 18.96456ms)
Aug 21 12:18:16.529: INFO: (7) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 18.447512ms)
Aug 21 12:18:16.529: INFO: (7) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 19.266541ms)
Aug 21 12:18:16.529: INFO: (7) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 19.330076ms)
Aug 21 12:18:16.529: INFO: (7) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 19.053233ms)
Aug 21 12:18:16.538: INFO: (8) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 8.357863ms)
Aug 21 12:18:16.538: INFO: (8) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 8.551186ms)
Aug 21 12:18:16.538: INFO: (8) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 7.842504ms)
Aug 21 12:18:16.538: INFO: (8) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 7.80823ms)
Aug 21 12:18:16.538: INFO: (8) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 8.252576ms)
Aug 21 12:18:16.538: INFO: (8) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 8.128274ms)
Aug 21 12:18:16.538: INFO: (8) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 9.166328ms)
Aug 21 12:18:16.538: INFO: (8) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 8.741526ms)
Aug 21 12:18:16.538: INFO: (8) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 8.372318ms)
Aug 21 12:18:16.538: INFO: (8) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 9.142843ms)
Aug 21 12:18:16.539: INFO: (8) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 8.423891ms)
Aug 21 12:18:16.539: INFO: (8) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 9.454088ms)
Aug 21 12:18:16.539: INFO: (8) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: ... (200; 5.941692ms)
Aug 21 12:18:16.546: INFO: (9) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 6.383053ms)
Aug 21 12:18:16.546: INFO: (9) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 6.670474ms)
Aug 21 12:18:16.546: INFO: (9) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 6.4578ms)
Aug 21 12:18:16.546: INFO: (9) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 6.393926ms)
Aug 21 12:18:16.547: INFO: (9) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: test<... (200; 7.222374ms)
Aug 21 12:18:16.547: INFO: (9) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 7.011336ms)
Aug 21 12:18:16.547: INFO: (9) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 7.043816ms)
Aug 21 12:18:16.548: INFO: (9) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 8.212202ms)
Aug 21 12:18:16.548: INFO: (9) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 8.375741ms)
Aug 21 12:18:16.553: INFO: (10) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 4.482417ms)
Aug 21 12:18:16.553: INFO: (10) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 3.749932ms)
Aug 21 12:18:16.554: INFO: (10) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 4.918345ms)
Aug 21 12:18:16.554: INFO: (10) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 5.21634ms)
Aug 21 12:18:16.555: INFO: (10) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 5.711968ms)
Aug 21 12:18:16.555: INFO: (10) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 6.061832ms)
Aug 21 12:18:16.555: INFO: (10) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 5.973472ms)
Aug 21 12:18:16.555: INFO: (10) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 6.140507ms)
Aug 21 12:18:16.555: INFO: (10) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 6.247122ms)
Aug 21 12:18:16.555: INFO: (10) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 6.113859ms)
Aug 21 12:18:16.555: INFO: (10) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 6.363131ms)
Aug 21 12:18:16.555: INFO: (10) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 6.85232ms)
Aug 21 12:18:16.555: INFO: (10) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 6.554648ms)
Aug 21 12:18:16.556: INFO: (10) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 7.106574ms)
Aug 21 12:18:16.556: INFO: (10) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 7.19336ms)
Aug 21 12:18:16.556: INFO: (10) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: test<... (200; 5.133906ms)
Aug 21 12:18:16.561: INFO: (11) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 5.267159ms)
Aug 21 12:18:16.561: INFO: (11) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 5.36069ms)
Aug 21 12:18:16.562: INFO: (11) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 5.471053ms)
Aug 21 12:18:16.562: INFO: (11) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 5.596012ms)
Aug 21 12:18:16.562: INFO: (11) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 5.862662ms)
Aug 21 12:18:16.562: INFO: (11) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 5.944355ms)
Aug 21 12:18:16.563: INFO: (11) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 6.389729ms)
Aug 21 12:18:16.563: INFO: (11) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 6.605389ms)
Aug 21 12:18:16.563: INFO: (11) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 6.769572ms)
Aug 21 12:18:16.563: INFO: (11) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 6.825559ms)
Aug 21 12:18:16.568: INFO: (12) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 4.147441ms)
Aug 21 12:18:16.568: INFO: (12) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: test (200; 7.981173ms)
Aug 21 12:18:16.572: INFO: (12) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 8.212007ms)
Aug 21 12:18:16.572: INFO: (12) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 8.0701ms)
Aug 21 12:18:16.572: INFO: (12) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 8.406791ms)
Aug 21 12:18:16.572: INFO: (12) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 8.460614ms)
Aug 21 12:18:16.572: INFO: (12) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 8.565076ms)
Aug 21 12:18:16.573: INFO: (12) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 8.968269ms)
Aug 21 12:18:16.573: INFO: (12) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 9.152956ms)
Aug 21 12:18:16.573: INFO: (12) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 9.156942ms)
Aug 21 12:18:16.573: INFO: (12) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 9.278938ms)
Aug 21 12:18:16.573: INFO: (12) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 9.186031ms)
Aug 21 12:18:16.576: INFO: (13) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: ... (200; 4.569525ms)
Aug 21 12:18:16.578: INFO: (13) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 4.631758ms)
Aug 21 12:18:16.578: INFO: (13) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 4.812273ms)
Aug 21 12:18:16.578: INFO: (13) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 4.810207ms)
Aug 21 12:18:16.578: INFO: (13) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 4.880299ms)
Aug 21 12:18:16.578: INFO: (13) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 4.986265ms)
Aug 21 12:18:16.578: INFO: (13) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 5.384242ms)
Aug 21 12:18:16.579: INFO: (13) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 5.601605ms)
Aug 21 12:18:16.579: INFO: (13) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 5.843963ms)
Aug 21 12:18:16.579: INFO: (13) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 5.764725ms)
Aug 21 12:18:16.582: INFO: (13) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 8.446639ms)
Aug 21 12:18:16.582: INFO: (13) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 8.989347ms)
Aug 21 12:18:16.582: INFO: (13) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 8.817157ms)
Aug 21 12:18:16.582: INFO: (13) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 9.175844ms)
Aug 21 12:18:16.582: INFO: (13) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 9.461167ms)
Aug 21 12:18:16.587: INFO: (14) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 4.620505ms)
Aug 21 12:18:16.588: INFO: (14) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 5.105762ms)
Aug 21 12:18:16.588: INFO: (14) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 4.884378ms)
Aug 21 12:18:16.588: INFO: (14) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 5.211214ms)
Aug 21 12:18:16.588: INFO: (14) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 4.873288ms)
Aug 21 12:18:16.589: INFO: (14) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 6.085679ms)
Aug 21 12:18:16.589: INFO: (14) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 6.509426ms)
Aug 21 12:18:16.589: INFO: (14) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 6.692331ms)
Aug 21 12:18:16.589: INFO: (14) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 6.762993ms)
Aug 21 12:18:16.589: INFO: (14) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 6.558614ms)
Aug 21 12:18:16.590: INFO: (14) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 6.830889ms)
Aug 21 12:18:16.590: INFO: (14) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 7.044908ms)
Aug 21 12:18:16.590: INFO: (14) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 7.081388ms)
Aug 21 12:18:16.590: INFO: (14) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 7.324545ms)
Aug 21 12:18:16.590: INFO: (14) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 7.479861ms)
Aug 21 12:18:16.591: INFO: (14) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: test<... (200; 3.669062ms)
Aug 21 12:18:16.595: INFO: (15) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: test (200; 5.435258ms)
Aug 21 12:18:16.597: INFO: (15) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 5.817744ms)
Aug 21 12:18:16.597: INFO: (15) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 6.136482ms)
Aug 21 12:18:16.597: INFO: (15) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 6.017384ms)
Aug 21 12:18:16.597: INFO: (15) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 6.133045ms)
Aug 21 12:18:16.597: INFO: (15) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 6.284563ms)
Aug 21 12:18:16.597: INFO: (15) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 6.438676ms)
Aug 21 12:18:16.602: INFO: (16) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: ... (200; 4.151651ms)
Aug 21 12:18:16.602: INFO: (16) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 4.312729ms)
Aug 21 12:18:16.602: INFO: (16) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 4.322049ms)
Aug 21 12:18:16.602: INFO: (16) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 4.26151ms)
Aug 21 12:18:16.603: INFO: (16) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 4.974995ms)
Aug 21 12:18:16.603: INFO: (16) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 5.108729ms)
Aug 21 12:18:16.603: INFO: (16) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 5.171438ms)
Aug 21 12:18:16.603: INFO: (16) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 5.270563ms)
Aug 21 12:18:16.603: INFO: (16) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 5.2951ms)
Aug 21 12:18:16.603: INFO: (16) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 5.491408ms)
Aug 21 12:18:16.603: INFO: (16) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 5.574327ms)
Aug 21 12:18:16.604: INFO: (16) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 6.591842ms)
Aug 21 12:18:16.604: INFO: (16) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 6.599919ms)
Aug 21 12:18:16.604: INFO: (16) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 6.610704ms)
Aug 21 12:18:16.604: INFO: (16) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 6.715865ms)
Aug 21 12:18:16.610: INFO: (17) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 5.321334ms)
Aug 21 12:18:16.610: INFO: (17) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: ... (200; 5.896414ms)
Aug 21 12:18:16.611: INFO: (17) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 6.285916ms)
Aug 21 12:18:16.611: INFO: (17) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 5.929025ms)
Aug 21 12:18:16.611: INFO: (17) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 6.391779ms)
Aug 21 12:18:16.612: INFO: (17) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 7.138262ms)
Aug 21 12:18:16.612: INFO: (17) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 6.925587ms)
Aug 21 12:18:16.612: INFO: (17) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 6.992524ms)
Aug 21 12:18:16.612: INFO: (17) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 7.076948ms)
Aug 21 12:18:16.612: INFO: (17) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 7.317538ms)
Aug 21 12:18:16.612: INFO: (17) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 7.417938ms)
Aug 21 12:18:16.612: INFO: (17) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 7.216378ms)
Aug 21 12:18:16.612: INFO: (17) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 7.407029ms)
Aug 21 12:18:16.616: INFO: (18) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29/proxy/: test (200; 3.632859ms)
Aug 21 12:18:16.617: INFO: (18) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 4.02487ms)
Aug 21 12:18:16.617: INFO: (18) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 4.029874ms)
Aug 21 12:18:16.617: INFO: (18) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 4.05584ms)
Aug 21 12:18:16.617: INFO: (18) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 4.937973ms)
Aug 21 12:18:16.618: INFO: (18) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 5.078371ms)
Aug 21 12:18:16.618: INFO: (18) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 4.846322ms)
Aug 21 12:18:16.618: INFO: (18) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: ... (200; 6.038351ms)
Aug 21 12:18:16.619: INFO: (18) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 6.125827ms)
Aug 21 12:18:16.622: INFO: (19) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:443/proxy/: test (200; 4.768535ms)
Aug 21 12:18:16.624: INFO: (19) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:1080/proxy/: ... (200; 4.916967ms)
Aug 21 12:18:16.625: INFO: (19) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname1/proxy/: tls baz (200; 5.306223ms)
Aug 21 12:18:16.625: INFO: (19) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname2/proxy/: bar (200; 5.88809ms)
Aug 21 12:18:16.625: INFO: (19) /api/v1/namespaces/proxy-3095/services/http:proxy-service-vtmdk:portname1/proxy/: foo (200; 6.035822ms)
Aug 21 12:18:16.625: INFO: (19) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:462/proxy/: tls qux (200; 6.041214ms)
Aug 21 12:18:16.625: INFO: (19) /api/v1/namespaces/proxy-3095/pods/http:proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 6.037999ms)
Aug 21 12:18:16.625: INFO: (19) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:162/proxy/: bar (200; 6.152505ms)
Aug 21 12:18:16.625: INFO: (19) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname1/proxy/: foo (200; 6.073472ms)
Aug 21 12:18:16.626: INFO: (19) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:160/proxy/: foo (200; 6.448689ms)
Aug 21 12:18:16.626: INFO: (19) /api/v1/namespaces/proxy-3095/services/proxy-service-vtmdk:portname2/proxy/: bar (200; 6.310616ms)
Aug 21 12:18:16.626: INFO: (19) /api/v1/namespaces/proxy-3095/pods/proxy-service-vtmdk-tdq29:1080/proxy/: test<... (200; 6.504218ms)
Aug 21 12:18:16.626: INFO: (19) /api/v1/namespaces/proxy-3095/pods/https:proxy-service-vtmdk-tdq29:460/proxy/: tls baz (200; 6.674113ms)
Aug 21 12:18:16.626: INFO: (19) /api/v1/namespaces/proxy-3095/services/https:proxy-service-vtmdk:tlsportname2/proxy/: tls qux (200; 6.485682ms)
STEP: deleting ReplicationController proxy-service-vtmdk in namespace proxy-3095, will wait for the garbage collector to delete the pods
Aug 21 12:18:16.687: INFO: Deleting ReplicationController proxy-service-vtmdk took: 6.889251ms
Aug 21 12:18:16.788: INFO: Terminating ReplicationController proxy-service-vtmdk pods took: 100.863709ms
[AfterEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:18:19.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3095" for this suite.

• [SLOW TEST:9.550 seconds]
[sig-network] Proxy
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":69,"skipped":1062,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:18:19.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8472
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-8472
I0821 12:18:20.781875      10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8472, replica count: 2
I0821 12:18:23.833281      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 12:18:26.834038      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 12:18:26.834: INFO: Creating new exec pod
Aug 21 12:18:32.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-8472 execpod4q6m9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 21 12:18:33.491: INFO: stderr: "I0821 12:18:33.342641     598 log.go:172] (0x4000708000) (0x400071c000) Create stream\nI0821 12:18:33.345430     598 log.go:172] (0x4000708000) (0x400071c000) Stream added, broadcasting: 1\nI0821 12:18:33.355909     598 log.go:172] (0x4000708000) Reply frame received for 1\nI0821 12:18:33.356543     598 log.go:172] (0x4000708000) (0x40008274a0) Create stream\nI0821 12:18:33.356616     598 log.go:172] (0x4000708000) (0x40008274a0) Stream added, broadcasting: 3\nI0821 12:18:33.357913     598 log.go:172] (0x4000708000) Reply frame received for 3\nI0821 12:18:33.358230     598 log.go:172] (0x4000708000) (0x400071c0a0) Create stream\nI0821 12:18:33.358303     598 log.go:172] (0x4000708000) (0x400071c0a0) Stream added, broadcasting: 5\nI0821 12:18:33.359531     598 log.go:172] (0x4000708000) Reply frame received for 5\nI0821 12:18:33.465563     598 log.go:172] (0x4000708000) Data frame received for 3\nI0821 12:18:33.465922     598 log.go:172] (0x40008274a0) (3) Data frame handling\nI0821 12:18:33.466770     598 log.go:172] (0x4000708000) Data frame received for 1\nI0821 12:18:33.466894     598 log.go:172] (0x400071c000) (1) Data frame handling\nI0821 12:18:33.467257     598 log.go:172] (0x4000708000) Data frame received for 5\nI0821 12:18:33.467353     598 log.go:172] (0x400071c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0821 12:18:33.468857     598 log.go:172] (0x400071c000) (1) Data frame sent\nI0821 12:18:33.469144     598 log.go:172] (0x400071c0a0) (5) Data frame sent\nI0821 12:18:33.469388     598 log.go:172] (0x4000708000) Data frame received for 5\nI0821 12:18:33.469494     598 log.go:172] (0x400071c0a0) (5) Data frame handling\nI0821 12:18:33.470231     598 log.go:172] (0x4000708000) (0x400071c000) Stream removed, broadcasting: 1\nI0821 12:18:33.474143     598 log.go:172] (0x4000708000) Go away received\nI0821 12:18:33.478083     598 log.go:172] (0x4000708000) (0x400071c000) Stream removed, broadcasting: 1\nI0821 12:18:33.478584     598 log.go:172] (0x4000708000) (0x40008274a0) Stream removed, broadcasting: 3\nI0821 12:18:33.478905     598 log.go:172] (0x4000708000) (0x400071c0a0) Stream removed, broadcasting: 5\n"
Aug 21 12:18:33.491: INFO: stdout: ""
Aug 21 12:18:33.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-8472 execpod4q6m9 -- /bin/sh -x -c nc -zv -t -w 2 10.110.96.9 80'
Aug 21 12:18:34.994: INFO: stderr: "I0821 12:18:34.858410     621 log.go:172] (0x4000ac40b0) (0x40009d8140) Create stream\nI0821 12:18:34.863092     621 log.go:172] (0x4000ac40b0) (0x40009d8140) Stream added, broadcasting: 1\nI0821 12:18:34.878790     621 log.go:172] (0x4000ac40b0) Reply frame received for 1\nI0821 12:18:34.880130     621 log.go:172] (0x4000ac40b0) (0x4000829400) Create stream\nI0821 12:18:34.880246     621 log.go:172] (0x4000ac40b0) (0x4000829400) Stream added, broadcasting: 3\nI0821 12:18:34.881984     621 log.go:172] (0x4000ac40b0) Reply frame received for 3\nI0821 12:18:34.882431     621 log.go:172] (0x4000ac40b0) (0x40009d8280) Create stream\nI0821 12:18:34.882512     621 log.go:172] (0x4000ac40b0) (0x40009d8280) Stream added, broadcasting: 5\nI0821 12:18:34.884023     621 log.go:172] (0x4000ac40b0) Reply frame received for 5\nI0821 12:18:34.969898     621 log.go:172] (0x4000ac40b0) Data frame received for 5\nI0821 12:18:34.970249     621 log.go:172] (0x4000ac40b0) Data frame received for 3\nI0821 12:18:34.970457     621 log.go:172] (0x4000829400) (3) Data frame handling\nI0821 12:18:34.970791     621 log.go:172] (0x4000ac40b0) Data frame received for 1\nI0821 12:18:34.970932     621 log.go:172] (0x40009d8140) (1) Data frame handling\nI0821 12:18:34.971023     621 log.go:172] (0x40009d8280) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.96.9 80\nConnection to 10.110.96.9 80 port [tcp/http] succeeded!\nI0821 12:18:34.973517     621 log.go:172] (0x40009d8280) (5) Data frame sent\nI0821 12:18:34.973711     621 log.go:172] (0x40009d8140) (1) Data frame sent\nI0821 12:18:34.973820     621 log.go:172] (0x4000ac40b0) Data frame received for 5\nI0821 12:18:34.973904     621 log.go:172] (0x40009d8280) (5) Data frame handling\nI0821 12:18:34.974723     621 log.go:172] (0x4000ac40b0) (0x40009d8140) Stream removed, broadcasting: 1\nI0821 12:18:34.978012     621 log.go:172] (0x4000ac40b0) Go away received\nI0821 12:18:34.980916     621 log.go:172] (0x4000ac40b0) (0x40009d8140) Stream removed, broadcasting: 1\nI0821 12:18:34.981466     621 log.go:172] (0x4000ac40b0) (0x4000829400) Stream removed, broadcasting: 3\nI0821 12:18:34.981801     621 log.go:172] (0x4000ac40b0) (0x40009d8280) Stream removed, broadcasting: 5\n"
Aug 21 12:18:34.995: INFO: stdout: ""
Aug 21 12:18:34.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-8472 execpod4q6m9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 31523'
Aug 21 12:18:36.474: INFO: stderr: "I0821 12:18:36.356003     643 log.go:172] (0x400003a4d0) (0x40007f52c0) Create stream\nI0821 12:18:36.361786     643 log.go:172] (0x400003a4d0) (0x40007f52c0) Stream added, broadcasting: 1\nI0821 12:18:36.376874     643 log.go:172] (0x400003a4d0) Reply frame received for 1\nI0821 12:18:36.377580     643 log.go:172] (0x400003a4d0) (0x4000756000) Create stream\nI0821 12:18:36.377658     643 log.go:172] (0x400003a4d0) (0x4000756000) Stream added, broadcasting: 3\nI0821 12:18:36.379301     643 log.go:172] (0x400003a4d0) Reply frame received for 3\nI0821 12:18:36.379620     643 log.go:172] (0x400003a4d0) (0x40007f54a0) Create stream\nI0821 12:18:36.379711     643 log.go:172] (0x400003a4d0) (0x40007f54a0) Stream added, broadcasting: 5\nI0821 12:18:36.381218     643 log.go:172] (0x400003a4d0) Reply frame received for 5\nI0821 12:18:36.449393     643 log.go:172] (0x400003a4d0) Data frame received for 3\nI0821 12:18:36.450119     643 log.go:172] (0x400003a4d0) Data frame received for 1\nI0821 12:18:36.450310     643 log.go:172] (0x40007f52c0) (1) Data frame handling\nI0821 12:18:36.450710     643 log.go:172] (0x400003a4d0) Data frame received for 5\nI0821 12:18:36.450958     643 log.go:172] (0x40007f54a0) (5) Data frame handling\nI0821 12:18:36.451263     643 log.go:172] (0x4000756000) (3) Data frame handling\nI0821 12:18:36.456563     643 log.go:172] (0x40007f54a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.16 31523\nConnection to 172.18.0.16 31523 port [tcp/31523] succeeded!\nI0821 12:18:36.457104     643 log.go:172] (0x40007f52c0) (1) Data frame sent\nI0821 12:18:36.457705     643 log.go:172] (0x400003a4d0) Data frame received for 5\nI0821 12:18:36.457817     643 log.go:172] (0x40007f54a0) (5) Data frame handling\nI0821 12:18:36.458887     643 log.go:172] (0x400003a4d0) (0x40007f52c0) Stream removed, broadcasting: 1\nI0821 12:18:36.459949     643 log.go:172] (0x400003a4d0) Go away received\nI0821 12:18:36.462419     643 log.go:172] (0x400003a4d0) (0x40007f52c0) Stream removed, broadcasting: 1\nI0821 12:18:36.463183     643 log.go:172] (0x400003a4d0) (0x4000756000) Stream removed, broadcasting: 3\nI0821 12:18:36.463819     643 log.go:172] (0x400003a4d0) (0x40007f54a0) Stream removed, broadcasting: 5\n"
Aug 21 12:18:36.475: INFO: stdout: ""
Aug 21 12:18:36.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-8472 execpod4q6m9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31523'
Aug 21 12:18:37.948: INFO: stderr: "I0821 12:18:37.818938     665 log.go:172] (0x4000a880b0) (0x40008114a0) Create stream\nI0821 12:18:37.821313     665 log.go:172] (0x4000a880b0) (0x40008114a0) Stream added, broadcasting: 1\nI0821 12:18:37.835226     665 log.go:172] (0x4000a880b0) Reply frame received for 1\nI0821 12:18:37.836494     665 log.go:172] (0x4000a880b0) (0x4000811540) Create stream\nI0821 12:18:37.836610     665 log.go:172] (0x4000a880b0) (0x4000811540) Stream added, broadcasting: 3\nI0821 12:18:37.838300     665 log.go:172] (0x4000a880b0) Reply frame received for 3\nI0821 12:18:37.838647     665 log.go:172] (0x4000a880b0) (0x40008115e0) Create stream\nI0821 12:18:37.838719     665 log.go:172] (0x4000a880b0) (0x40008115e0) Stream added, broadcasting: 5\nI0821 12:18:37.840214     665 log.go:172] (0x4000a880b0) Reply frame received for 5\nI0821 12:18:37.923336     665 log.go:172] (0x4000a880b0) Data frame received for 3\nI0821 12:18:37.923900     665 log.go:172] (0x4000a880b0) Data frame received for 1\nI0821 12:18:37.924106     665 log.go:172] (0x4000811540) (3) Data frame handling\nI0821 12:18:37.924257     665 log.go:172] (0x40008114a0) (1) Data frame handling\nI0821 12:18:37.924479     665 log.go:172] (0x4000a880b0) Data frame received for 5\nI0821 12:18:37.924624     665 log.go:172] (0x40008115e0) (5) Data frame handling\nI0821 12:18:37.928247     665 log.go:172] (0x40008114a0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 31523\nConnection to 172.18.0.13 31523 port [tcp/31523] succeeded!\nI0821 12:18:37.928478     665 log.go:172] (0x40008115e0) (5) Data frame sent\nI0821 12:18:37.928598     665 log.go:172] (0x4000a880b0) Data frame received for 5\nI0821 12:18:37.928681     665 log.go:172] (0x40008115e0) (5) Data frame handling\nI0821 12:18:37.930986     665 log.go:172] (0x4000a880b0) (0x40008114a0) Stream removed, broadcasting: 1\nI0821 12:18:37.931817     665 log.go:172] (0x4000a880b0) Go away received\nI0821 12:18:37.936218     665 log.go:172] (0x4000a880b0) (0x40008114a0) Stream removed, broadcasting: 1\nI0821 12:18:37.936667     665 log.go:172] (0x4000a880b0) (0x4000811540) Stream removed, broadcasting: 3\nI0821 12:18:37.937149     665 log.go:172] (0x4000a880b0) (0x40008115e0) Stream removed, broadcasting: 5\n"
Aug 21 12:18:37.949: INFO: stdout: ""
Aug 21 12:18:37.949: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:18:38.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8472" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:18.564 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":70,"skipped":1097,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:18:38.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:18:38.223: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/: 
alternatives.log
containers/
[node log directory listing (alternatives.log, containers/) repeated for the remaining proxy iterations; log truncated before the next spec's setup]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 21 12:18:38.503: INFO: Waiting up to 5m0s for pod "pod-102c07f1-f6fc-4e9f-9c6a-81da949c2013" in namespace "emptydir-7688" to be "Succeeded or Failed"
Aug 21 12:18:38.551: INFO: Pod "pod-102c07f1-f6fc-4e9f-9c6a-81da949c2013": Phase="Pending", Reason="", readiness=false. Elapsed: 47.856238ms
Aug 21 12:18:40.614: INFO: Pod "pod-102c07f1-f6fc-4e9f-9c6a-81da949c2013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111228846s
Aug 21 12:18:42.622: INFO: Pod "pod-102c07f1-f6fc-4e9f-9c6a-81da949c2013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119564549s
STEP: Saw pod success
Aug 21 12:18:42.623: INFO: Pod "pod-102c07f1-f6fc-4e9f-9c6a-81da949c2013" satisfied condition "Succeeded or Failed"
Aug 21 12:18:42.628: INFO: Trying to get logs from node kali-worker2 pod pod-102c07f1-f6fc-4e9f-9c6a-81da949c2013 container test-container: 
STEP: delete the pod
Aug 21 12:18:42.702: INFO: Waiting for pod pod-102c07f1-f6fc-4e9f-9c6a-81da949c2013 to disappear
Aug 21 12:18:42.710: INFO: Pod pod-102c07f1-f6fc-4e9f-9c6a-81da949c2013 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:18:42.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7688" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1143,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:18:42.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:18:42.820: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 12:18:44.827: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 12:18:46.826: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 12:18:48.828: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:18:50.826: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:18:52.828: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:18:54.826: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:18:56.941: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:18:58.829: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:19:00.828: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:19:02.827: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:19:04.826: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:19:06.827: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:19:08.826: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = false)
Aug 21 12:19:10.827: INFO: The status of Pod test-webserver-2ecf5c60-98fd-44bc-90a2-6b7cf6202320 is Running (Ready = true)
Aug 21 12:19:10.833: INFO: Container started at 2020-08-21 12:18:46 +0000 UTC, pod became ready at 2020-08-21 12:19:08 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:19:10.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7633" for this suite.

• [SLOW TEST:28.123 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1145,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:19:10.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:19:10.977: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-071f81fa-aa7d-4222-bbdb-20258554fb51" in namespace "security-context-test-1369" to be "Succeeded or Failed"
Aug 21 12:19:10.986: INFO: Pod "busybox-privileged-false-071f81fa-aa7d-4222-bbdb-20258554fb51": Phase="Pending", Reason="", readiness=false. Elapsed: 8.621795ms
Aug 21 12:19:13.102: INFO: Pod "busybox-privileged-false-071f81fa-aa7d-4222-bbdb-20258554fb51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124465162s
Aug 21 12:19:15.138: INFO: Pod "busybox-privileged-false-071f81fa-aa7d-4222-bbdb-20258554fb51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16015448s
Aug 21 12:19:15.138: INFO: Pod "busybox-privileged-false-071f81fa-aa7d-4222-bbdb-20258554fb51" satisfied condition "Succeeded or Failed"
Aug 21 12:19:15.148: INFO: Got logs for pod "busybox-privileged-false-071f81fa-aa7d-4222-bbdb-20258554fb51": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:19:15.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1369" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1148,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:19:15.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:19:15.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51702ae7-f03c-4c70-9b20-ca93aae77047" in namespace "downward-api-7145" to be "Succeeded or Failed"
Aug 21 12:19:15.283: INFO: Pod "downwardapi-volume-51702ae7-f03c-4c70-9b20-ca93aae77047": Phase="Pending", Reason="", readiness=false. Elapsed: 47.70609ms
Aug 21 12:19:17.291: INFO: Pod "downwardapi-volume-51702ae7-f03c-4c70-9b20-ca93aae77047": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055386396s
Aug 21 12:19:19.297: INFO: Pod "downwardapi-volume-51702ae7-f03c-4c70-9b20-ca93aae77047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062236086s
STEP: Saw pod success
Aug 21 12:19:19.298: INFO: Pod "downwardapi-volume-51702ae7-f03c-4c70-9b20-ca93aae77047" satisfied condition "Succeeded or Failed"
Aug 21 12:19:19.303: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-51702ae7-f03c-4c70-9b20-ca93aae77047 container client-container: 
STEP: delete the pod
Aug 21 12:19:19.346: INFO: Waiting for pod downwardapi-volume-51702ae7-f03c-4c70-9b20-ca93aae77047 to disappear
Aug 21 12:19:19.356: INFO: Pod downwardapi-volume-51702ae7-f03c-4c70-9b20-ca93aae77047 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:19:19.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7145" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1150,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:19:19.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0821 12:19:29.663131      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 12:19:29.663: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:19:29.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2456" for this suite.

• [SLOW TEST:10.274 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":76,"skipped":1160,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:19:29.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:19:29.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea904af0-5466-4fbe-9354-46d100254918" in namespace "downward-api-2127" to be "Succeeded or Failed"
Aug 21 12:19:29.776: INFO: Pod "downwardapi-volume-ea904af0-5466-4fbe-9354-46d100254918": Phase="Pending", Reason="", readiness=false. Elapsed: 32.380043ms
Aug 21 12:19:31.918: INFO: Pod "downwardapi-volume-ea904af0-5466-4fbe-9354-46d100254918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174881218s
Aug 21 12:19:33.926: INFO: Pod "downwardapi-volume-ea904af0-5466-4fbe-9354-46d100254918": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182916963s
Aug 21 12:19:35.976: INFO: Pod "downwardapi-volume-ea904af0-5466-4fbe-9354-46d100254918": Phase="Running", Reason="", readiness=true. Elapsed: 6.232597872s
Aug 21 12:19:37.982: INFO: Pod "downwardapi-volume-ea904af0-5466-4fbe-9354-46d100254918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.23854312s
STEP: Saw pod success
Aug 21 12:19:37.982: INFO: Pod "downwardapi-volume-ea904af0-5466-4fbe-9354-46d100254918" satisfied condition "Succeeded or Failed"
Aug 21 12:19:37.987: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-ea904af0-5466-4fbe-9354-46d100254918 container client-container: 
STEP: delete the pod
Aug 21 12:19:38.061: INFO: Waiting for pod downwardapi-volume-ea904af0-5466-4fbe-9354-46d100254918 to disappear
Aug 21 12:19:38.087: INFO: Pod downwardapi-volume-ea904af0-5466-4fbe-9354-46d100254918 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:19:38.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2127" for this suite.

• [SLOW TEST:8.423 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1161,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:19:38.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:19:46.627: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:19:48.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609186, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609186, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609187, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609186, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:19:50.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609186, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609186, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609187, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609186, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:19:52.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609186, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609186, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609187, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609186, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:19:56.067: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:19:56.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-796" for this suite.
STEP: Destroying namespace "webhook-796-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.424 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":78,"skipped":1234,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:19:56.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:20:30.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6994" for this suite.
STEP: Destroying namespace "nsdeletetest-438" for this suite.
Aug 21 12:20:30.348: INFO: Namespace nsdeletetest-438 was already deleted
STEP: Destroying namespace "nsdeletetest-1947" for this suite.

• [SLOW TEST:33.827 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":79,"skipped":1276,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:20:30.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 12:20:30.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4332'
Aug 21 12:20:31.664: INFO: stderr: ""
Aug 21 12:20:31.664: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 21 12:20:36.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4332 -o json'
Aug 21 12:20:37.945: INFO: stderr: ""
Aug 21 12:20:37.946: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-21T12:20:31Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-08-21T12:20:31Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                            \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.2.55\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                        
    }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-08-21T12:20:34Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-4332\",\n        \"resourceVersion\": \"2113978\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4332/pods/e2e-test-httpd-pod\",\n        \"uid\": \"aeda1dc1-163f-4aee-9cda-3b249a2197b4\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-lzwfs\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-lzwfs\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-lzwfs\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T12:20:31Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T12:20:34Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T12:20:34Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-21T12:20:31Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"containerd://1c25c170a7e7aab863be98e130db9df7b6356b14a8c74407fb2e8dd08a85f736\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-21T12:20:34Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.16\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.55\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.55\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-21T12:20:31Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 21 12:20:37.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4332'
Aug 21 12:20:39.660: INFO: stderr: ""
Aug 21 12:20:39.660: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Aug 21 12:20:39.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4332'
Aug 21 12:20:49.153: INFO: stderr: ""
Aug 21 12:20:49.153: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:20:49.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4332" for this suite.

• [SLOW TEST:18.799 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":80,"skipped":1300,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
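The replace step above pipes an updated manifest into kubectl replace; a minimal sketch of what that manifest amounts to is below. Only the pod name, namespace, label, and the two image tags come from the log; the rest is assumed boilerplate.

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  namespace: kubectl-4332
  labels:
    run: e2e-test-httpd-pod
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/busybox:1.29   # was docker.io/library/httpd:2.4.38-alpine before the replace

Pod specs are mostly immutable after creation, but container images can be updated, which is why a straight replace of the live pod succeeds here.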
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:20:49.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5242.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5242.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5242.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5242.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5242.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5242.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 12:20:55.302: INFO: DNS probes using dns-5242/dns-test-9ba8062c-8e45-46fb-b7e6-660d44792afe succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:20:55.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5242" for this suite.

• [SLOW TEST:6.200 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":81,"skipped":1316,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
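The wheezy/jessie loops above reduce to two lookups: the service FQDN via getent (exercising /etc/hosts and the resolver) and the pod's own A record, whose name is the pod IP with dots replaced by dashes under <namespace>.pod.cluster.local. A rough standalone probe pod along those lines (pod name and image are assumptions, not taken from the suite):

apiVersion: v1
kind: Pod
metadata:
  name: dns-hosts-probe            # hypothetical name
  namespace: dns-5242
spec:
  restartPolicy: Never
  containers:
  - name: querier
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7   # assumed utility image with dig/getent
    command: ["sh", "-c"]
    args:
    - >
      getent hosts dns-querier-1.dns-test-service.dns-5242.svc.cluster.local &&
      dig +search +short "$(hostname -i | sed 's/\./-/g').dns-5242.pod.cluster.local" A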
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:20:55.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-00331a74-f504-44fe-bbde-ef98bdd02d2e in namespace container-probe-6661
Aug 21 12:21:01.905: INFO: Started pod liveness-00331a74-f504-44fe-bbde-ef98bdd02d2e in namespace container-probe-6661
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 12:21:01.909: INFO: Initial restart count of pod liveness-00331a74-f504-44fe-bbde-ef98bdd02d2e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:25:03.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6661" for this suite.

• [SLOW TEST:248.105 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1325,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
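The probe under test above is a TCP liveness check against a port that is actually listening, so the restart count is expected to stay at 0 for the whole observation window. A minimal sketch of that shape (pod name, image, and timings are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo          # hypothetical name
spec:
  containers:
  - name: agnhost
    image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed image; serves HTTP on 8080
    args: ["netexec", "--http-port=8080"]
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10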
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:25:03.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 21 12:25:03.752: INFO: Waiting up to 5m0s for pod "pod-c409b11c-1a40-44e7-ba60-390c1c94942d" in namespace "emptydir-9358" to be "Succeeded or Failed"
Aug 21 12:25:03.940: INFO: Pod "pod-c409b11c-1a40-44e7-ba60-390c1c94942d": Phase="Pending", Reason="", readiness=false. Elapsed: 187.37525ms
Aug 21 12:25:05.946: INFO: Pod "pod-c409b11c-1a40-44e7-ba60-390c1c94942d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193880795s
Aug 21 12:25:07.953: INFO: Pod "pod-c409b11c-1a40-44e7-ba60-390c1c94942d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.200572369s
STEP: Saw pod success
Aug 21 12:25:07.953: INFO: Pod "pod-c409b11c-1a40-44e7-ba60-390c1c94942d" satisfied condition "Succeeded or Failed"
Aug 21 12:25:07.958: INFO: Trying to get logs from node kali-worker2 pod pod-c409b11c-1a40-44e7-ba60-390c1c94942d container test-container: 
STEP: delete the pod
Aug 21 12:25:08.169: INFO: Waiting for pod pod-c409b11c-1a40-44e7-ba60-390c1c94942d to disappear
Aug 21 12:25:08.194: INFO: Pod pod-c409b11c-1a40-44e7-ba60-390c1c94942d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:25:08.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9358" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1365,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
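The pod created above exercises a tmpfs-backed emptyDir written by a non-root user with mode 0666. A rough equivalent (names, image, and command are illustrative, not the suite's own test image and arguments):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  securityContext:
    runAsUser: 1001                # non-root
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # backs the volume with tmpfs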
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:25:08.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-fdc05b2d-3cbd-482a-8b68-5c80685f9c1d
STEP: Creating a pod to test consume secrets
Aug 21 12:25:08.438: INFO: Waiting up to 5m0s for pod "pod-secrets-a9ebb83a-94fe-45ab-8bfd-d4ab0f468fae" in namespace "secrets-6134" to be "Succeeded or Failed"
Aug 21 12:25:08.480: INFO: Pod "pod-secrets-a9ebb83a-94fe-45ab-8bfd-d4ab0f468fae": Phase="Pending", Reason="", readiness=false. Elapsed: 42.177256ms
Aug 21 12:25:10.532: INFO: Pod "pod-secrets-a9ebb83a-94fe-45ab-8bfd-d4ab0f468fae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093916393s
Aug 21 12:25:12.539: INFO: Pod "pod-secrets-a9ebb83a-94fe-45ab-8bfd-d4ab0f468fae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100604155s
STEP: Saw pod success
Aug 21 12:25:12.539: INFO: Pod "pod-secrets-a9ebb83a-94fe-45ab-8bfd-d4ab0f468fae" satisfied condition "Succeeded or Failed"
Aug 21 12:25:12.544: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-a9ebb83a-94fe-45ab-8bfd-d4ab0f468fae container secret-volume-test: 
STEP: delete the pod
Aug 21 12:25:12.601: INFO: Waiting for pod pod-secrets-a9ebb83a-94fe-45ab-8bfd-d4ab0f468fae to disappear
Aug 21 12:25:12.615: INFO: Pod pod-secrets-a9ebb83a-94fe-45ab-8bfd-d4ab0f468fae no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:25:12.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6134" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1371,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
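The volume above is a secret projected with an explicit defaultMode, and the test then checks the mounted file's permissions and contents from inside the pod. A sketch with the secret name taken from the log and an illustrative mode of 0400:

apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo    # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-fdc05b2d-3cbd-482a-8b68-5c80685f9c1d
      defaultMode: 0400            # octal; stored by the API as decimal 256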
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:25:12.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:25:16.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3955" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1382,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
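The test above runs a one-shot busybox command and asserts its stdout can be read back through the logs API. A minimal equivalent (pod name and message are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-echo-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo 'Hello from the busybox container'"]

The output would then be retrieved with kubectl logs against that pod, which is essentially what the suite does through the client library.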
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:25:16.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-4b650489-d3e2-48f3-a706-f5e8b8b7faf1
Aug 21 12:25:16.890: INFO: Pod name my-hostname-basic-4b650489-d3e2-48f3-a706-f5e8b8b7faf1: Found 0 pods out of 1
Aug 21 12:25:21.912: INFO: Pod name my-hostname-basic-4b650489-d3e2-48f3-a706-f5e8b8b7faf1: Found 1 pods out of 1
Aug 21 12:25:21.912: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4b650489-d3e2-48f3-a706-f5e8b8b7faf1" are running
Aug 21 12:25:21.920: INFO: Pod "my-hostname-basic-4b650489-d3e2-48f3-a706-f5e8b8b7faf1-kbx47" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 12:25:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 12:25:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 12:25:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 12:25:16 +0000 UTC Reason: Message:}])
Aug 21 12:25:21.921: INFO: Trying to dial the pod
Aug 21 12:25:26.946: INFO: Controller my-hostname-basic-4b650489-d3e2-48f3-a706-f5e8b8b7faf1: Got expected result from replica 1 [my-hostname-basic-4b650489-d3e2-48f3-a706-f5e8b8b7faf1-kbx47]: "my-hostname-basic-4b650489-d3e2-48f3-a706-f5e8b8b7faf1-kbx47", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:25:26.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8397" for this suite.

• [SLOW TEST:10.184 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":86,"skipped":1410,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:25:26.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:25:27.065: INFO: Creating ReplicaSet my-hostname-basic-f5741622-d1b9-4c78-afce-b3a479b81b56
Aug 21 12:25:27.108: INFO: Pod name my-hostname-basic-f5741622-d1b9-4c78-afce-b3a479b81b56: Found 0 pods out of 1
Aug 21 12:25:32.121: INFO: Pod name my-hostname-basic-f5741622-d1b9-4c78-afce-b3a479b81b56: Found 1 pods out of 1
Aug 21 12:25:32.121: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f5741622-d1b9-4c78-afce-b3a479b81b56" is running
Aug 21 12:25:32.129: INFO: Pod "my-hostname-basic-f5741622-d1b9-4c78-afce-b3a479b81b56-jd8vp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 12:25:27 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 12:25:30 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 12:25:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 12:25:27 +0000 UTC Reason: Message:}])
Aug 21 12:25:32.129: INFO: Trying to dial the pod
Aug 21 12:25:37.147: INFO: Controller my-hostname-basic-f5741622-d1b9-4c78-afce-b3a479b81b56: Got expected result from replica 1 [my-hostname-basic-f5741622-d1b9-4c78-afce-b3a479b81b56-jd8vp]: "my-hostname-basic-f5741622-d1b9-4c78-afce-b3a479b81b56-jd8vp", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:25:37.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7096" for this suite.

• [SLOW TEST:10.200 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":87,"skipped":1422,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
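This test, like the ReplicationController one before it, stands up one replica of a hostname-serving image and then dials the pod, expecting it to answer with its own pod name. The shape of the object, with illustrative names and an assumed image:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-demo     # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed; replies with its own hostname
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376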
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:25:37.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 21 12:25:41.275: INFO: &Pod{ObjectMeta:{send-events-10d84b4e-c9fb-4b5d-88f9-1363c8fe848a  events-3356 /api/v1/namespaces/events-3356/pods/send-events-10d84b4e-c9fb-4b5d-88f9-1363c8fe848a adc208a4-161e-432c-a129-5530845953ec 2115084 0 2020-08-21 12:25:37 +0000 UTC   map[name:foo time:225521027] map[] [] []  [{e2e.test Update v1 2020-08-21 12:25:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 12:25:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 
123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wb5p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wb5p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wb5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:25:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:25:40 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:25:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:25:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.155,StartTime:2020-08-21 12:25:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 12:25:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://ba2a31794ecae13b45b0cd3ae331c69e3ccac162cf04002a123b61650f33c4f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.155,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Aug 21 12:25:43.288: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 21 12:25:45.298: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:25:45.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3356" for this suite.

• [SLOW TEST:8.171 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":88,"skipped":1431,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:25:45.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-77622fa8-bf57-4097-bbd7-221a96c5dda7
STEP: Creating a pod to test consume configMaps
Aug 21 12:25:45.431: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-564b58c8-7496-4cde-9917-74b4b96e3f70" in namespace "projected-1618" to be "Succeeded or Failed"
Aug 21 12:25:45.457: INFO: Pod "pod-projected-configmaps-564b58c8-7496-4cde-9917-74b4b96e3f70": Phase="Pending", Reason="", readiness=false. Elapsed: 26.204631ms
Aug 21 12:25:47.465: INFO: Pod "pod-projected-configmaps-564b58c8-7496-4cde-9917-74b4b96e3f70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034149393s
Aug 21 12:25:49.472: INFO: Pod "pod-projected-configmaps-564b58c8-7496-4cde-9917-74b4b96e3f70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041133101s
STEP: Saw pod success
Aug 21 12:25:49.473: INFO: Pod "pod-projected-configmaps-564b58c8-7496-4cde-9917-74b4b96e3f70" satisfied condition "Succeeded or Failed"
Aug 21 12:25:49.477: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-564b58c8-7496-4cde-9917-74b4b96e3f70 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 12:25:49.515: INFO: Waiting for pod pod-projected-configmaps-564b58c8-7496-4cde-9917-74b4b96e3f70 to disappear
Aug 21 12:25:49.519: INFO: Pod pod-projected-configmaps-564b58c8-7496-4cde-9917-74b4b96e3f70 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:25:49.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1618" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1494,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
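The "with mappings" variant above projects individual configMap keys to chosen paths rather than mounting every key under its own name. A sketch using the configMap name from the log and an illustrative key-to-path item:

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo   # hypothetical name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-77622fa8-bf57-4097-bbd7-221a96c5dda7
          items:
          - key: data-1            # illustrative key/path pair
            path: path/to/data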
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:25:49.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:25:51.658: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:25:53.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609551, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609551, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609551, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609551, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:25:56.721: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:25:56.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:25:57.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6206" for this suite.
STEP: Destroying namespace "webhook-6206-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.561 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":90,"skipped":1514,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
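The registration step above installs a validating webhook that vetoes create, update, and delete of the test custom resource. The configuration would look roughly like the following; the webhook service name and namespace appear in the log, while the group, resource, path, and webhook name are assumptions:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-demo          # hypothetical name
webhooks:
- name: deny-unwanted-crd-data.example.com # hypothetical name
  rules:
  - apiGroups: ["stable.example.com"]      # assumed group/resource
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["e2e-test-crds"]
  clientConfig:
    service:
      namespace: webhook-6206
      name: e2e-test-webhook
      path: /custom-resource               # assumed path
    # caBundle: <base64-encoded CA for the webhook server>
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail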
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:25:58.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 21 12:25:58.266: INFO: Waiting up to 5m0s for pod "downward-api-fb21fdc0-f597-4a90-8936-a24b634fe7f3" in namespace "downward-api-9377" to be "Succeeded or Failed"
Aug 21 12:25:58.522: INFO: Pod "downward-api-fb21fdc0-f597-4a90-8936-a24b634fe7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 256.228588ms
Aug 21 12:26:00.531: INFO: Pod "downward-api-fb21fdc0-f597-4a90-8936-a24b634fe7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264465126s
Aug 21 12:26:02.537: INFO: Pod "downward-api-fb21fdc0-f597-4a90-8936-a24b634fe7f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.270567888s
STEP: Saw pod success
Aug 21 12:26:02.537: INFO: Pod "downward-api-fb21fdc0-f597-4a90-8936-a24b634fe7f3" satisfied condition "Succeeded or Failed"
Aug 21 12:26:02.540: INFO: Trying to get logs from node kali-worker pod downward-api-fb21fdc0-f597-4a90-8936-a24b634fe7f3 container dapi-container: 
STEP: delete the pod
Aug 21 12:26:02.579: INFO: Waiting for pod downward-api-fb21fdc0-f597-4a90-8936-a24b634fe7f3 to disappear
Aug 21 12:26:02.584: INFO: Pod downward-api-fb21fdc0-f597-4a90-8936-a24b634fe7f3 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:26:02.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9377" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1551,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:26:02.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 21 12:26:02.683: INFO: Waiting up to 5m0s for pod "downward-api-f08a9b4b-d35b-47fa-9d7f-d69ed13503f6" in namespace "downward-api-720" to be "Succeeded or Failed"
Aug 21 12:26:02.687: INFO: Pod "downward-api-f08a9b4b-d35b-47fa-9d7f-d69ed13503f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.698667ms
Aug 21 12:26:04.693: INFO: Pod "downward-api-f08a9b4b-d35b-47fa-9d7f-d69ed13503f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010108258s
Aug 21 12:26:06.701: INFO: Pod "downward-api-f08a9b4b-d35b-47fa-9d7f-d69ed13503f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018229511s
STEP: Saw pod success
Aug 21 12:26:06.701: INFO: Pod "downward-api-f08a9b4b-d35b-47fa-9d7f-d69ed13503f6" satisfied condition "Succeeded or Failed"
Aug 21 12:26:06.707: INFO: Trying to get logs from node kali-worker pod downward-api-f08a9b4b-d35b-47fa-9d7f-d69ed13503f6 container dapi-container: 
STEP: delete the pod
Aug 21 12:26:06.762: INFO: Waiting for pod downward-api-f08a9b4b-d35b-47fa-9d7f-d69ed13503f6 to disappear
Aug 21 12:26:06.766: INFO: Pod downward-api-f08a9b4b-d35b-47fa-9d7f-d69ed13503f6 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:26:06.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-720" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1561,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
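Both downward API tests in this stretch expose pod metadata to the container as environment variables via fieldRef. A combined sketch covering the two fields exercised here, metadata.uid and status.hostIP (pod and variable names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo uid=$POD_UID host=$HOST_IP"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP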
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:26:06.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:26:23.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7869" for this suite.

• [SLOW TEST:16.369 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":93,"skipped":1593,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
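The test above creates two quotas so that a pod with no requests or limits (BestEffort QoS) is counted by one and ignored by the other, and the reverse for a pod with resources set. A sketch of the two objects (names and limits are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-besteffort           # hypothetical name
spec:
  hard:
    pods: "5"
  scopes: ["BestEffort"]
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-besteffort       # hypothetical name
spec:
  hard:
    pods: "5"
  scopes: ["NotBestEffort"]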
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:26:23.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:26:27.427: INFO: Waiting up to 5m0s for pod "client-envvars-2e230dbd-d236-4aa4-a7a8-ee1ccdaac2b0" in namespace "pods-6528" to be "Succeeded or Failed"
Aug 21 12:26:27.451: INFO: Pod "client-envvars-2e230dbd-d236-4aa4-a7a8-ee1ccdaac2b0": Phase="Pending", Reason="", readiness=false. Elapsed: 23.158532ms
Aug 21 12:26:29.579: INFO: Pod "client-envvars-2e230dbd-d236-4aa4-a7a8-ee1ccdaac2b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151816143s
Aug 21 12:26:31.585: INFO: Pod "client-envvars-2e230dbd-d236-4aa4-a7a8-ee1ccdaac2b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.15788993s
STEP: Saw pod success
Aug 21 12:26:31.585: INFO: Pod "client-envvars-2e230dbd-d236-4aa4-a7a8-ee1ccdaac2b0" satisfied condition "Succeeded or Failed"
Aug 21 12:26:31.590: INFO: Trying to get logs from node kali-worker2 pod client-envvars-2e230dbd-d236-4aa4-a7a8-ee1ccdaac2b0 container env3cont: 
STEP: delete the pod
Aug 21 12:26:31.660: INFO: Waiting for pod client-envvars-2e230dbd-d236-4aa4-a7a8-ee1ccdaac2b0 to disappear
Aug 21 12:26:31.677: INFO: Pod client-envvars-2e230dbd-d236-4aa4-a7a8-ee1ccdaac2b0 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:26:31.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6528" for this suite.

• [SLOW TEST:8.522 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1604,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
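The environment variables checked above are the docker-link-style ones the kubelet injects for services that already exist in the pod's namespace when the pod starts, e.g. <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT with the service name upper-cased and dashes turned into underscores; the concrete service name used by the suite is internal to the test. A pod that would surface them (names illustrative), with the enableServiceLinks knob that controls the injection:

apiVersion: v1
kind: Pod
metadata:
  name: service-env-demo           # hypothetical name
spec:
  enableServiceLinks: true         # default; set false to skip service env injection
  containers:
  - name: env3cont
    image: busybox:1.29
    command: ["sh", "-c", "env | grep _SERVICE_"]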
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:26:31.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9243
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-9243
I0821 12:26:31.955139      10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9243, replica count: 2
I0821 12:26:35.006769      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 12:26:38.007522      10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 12:26:38.007: INFO: Creating new exec pod
Aug 21 12:26:43.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-9243 execpod6n2v4 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 21 12:26:48.251: INFO: stderr: "I0821 12:26:48.114160     777 log.go:172] (0x400003bc30) (0x4000b50000) Create stream\nI0821 12:26:48.117234     777 log.go:172] (0x400003bc30) (0x4000b50000) Stream added, broadcasting: 1\nI0821 12:26:48.126520     777 log.go:172] (0x400003bc30) Reply frame received for 1\nI0821 12:26:48.127056     777 log.go:172] (0x400003bc30) (0x4000b3c000) Create stream\nI0821 12:26:48.127141     777 log.go:172] (0x400003bc30) (0x4000b3c000) Stream added, broadcasting: 3\nI0821 12:26:48.129433     777 log.go:172] (0x400003bc30) Reply frame received for 3\nI0821 12:26:48.130020     777 log.go:172] (0x400003bc30) (0x40008f80a0) Create stream\nI0821 12:26:48.130145     777 log.go:172] (0x400003bc30) (0x40008f80a0) Stream added, broadcasting: 5\nI0821 12:26:48.132132     777 log.go:172] (0x400003bc30) Reply frame received for 5\nI0821 12:26:48.226402     777 log.go:172] (0x400003bc30) Data frame received for 3\nI0821 12:26:48.226852     777 log.go:172] (0x400003bc30) Data frame received for 5\nI0821 12:26:48.226994     777 log.go:172] (0x400003bc30) Data frame received for 1\nI0821 12:26:48.227081     777 log.go:172] (0x4000b50000) (1) Data frame handling\nI0821 12:26:48.227208     777 log.go:172] (0x40008f80a0) (5) Data frame handling\nI0821 12:26:48.227431     777 log.go:172] (0x4000b3c000) (3) Data frame handling\nI0821 12:26:48.228281     777 log.go:172] (0x4000b50000) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0821 12:26:48.229260     777 log.go:172] (0x40008f80a0) (5) Data frame sent\nI0821 12:26:48.229553     777 log.go:172] (0x400003bc30) Data frame received for 5\nI0821 12:26:48.229625     777 log.go:172] (0x40008f80a0) (5) Data frame handling\nI0821 12:26:48.232568     777 log.go:172] (0x400003bc30) (0x4000b50000) Stream removed, broadcasting: 1\nI0821 12:26:48.233563     777 log.go:172] (0x400003bc30) Go away received\nI0821 12:26:48.236609     777 log.go:172] (0x400003bc30) (0x4000b50000) Stream removed, broadcasting: 1\nI0821 12:26:48.237080     777 log.go:172] (0x400003bc30) (0x4000b3c000) Stream removed, broadcasting: 3\nI0821 12:26:48.237482     777 log.go:172] (0x400003bc30) (0x40008f80a0) Stream removed, broadcasting: 5\n"
Aug 21 12:26:48.251: INFO: stdout: ""
Aug 21 12:26:48.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-9243 execpod6n2v4 -- /bin/sh -x -c nc -zv -t -w 2 10.101.24.163 80'
Aug 21 12:26:49.702: INFO: stderr: "I0821 12:26:49.590608     811 log.go:172] (0x40000e7130) (0x40009f81e0) Create stream\nI0821 12:26:49.594633     811 log.go:172] (0x40000e7130) (0x40009f81e0) Stream added, broadcasting: 1\nI0821 12:26:49.610411     811 log.go:172] (0x40000e7130) Reply frame received for 1\nI0821 12:26:49.611093     811 log.go:172] (0x40000e7130) (0x40009f8280) Create stream\nI0821 12:26:49.611164     811 log.go:172] (0x40000e7130) (0x40009f8280) Stream added, broadcasting: 3\nI0821 12:26:49.613051     811 log.go:172] (0x40000e7130) Reply frame received for 3\nI0821 12:26:49.613356     811 log.go:172] (0x40000e7130) (0x400073a000) Create stream\nI0821 12:26:49.613443     811 log.go:172] (0x40000e7130) (0x400073a000) Stream added, broadcasting: 5\nI0821 12:26:49.614624     811 log.go:172] (0x40000e7130) Reply frame received for 5\nI0821 12:26:49.679034     811 log.go:172] (0x40000e7130) Data frame received for 3\nI0821 12:26:49.679345     811 log.go:172] (0x40000e7130) Data frame received for 5\nI0821 12:26:49.679575     811 log.go:172] (0x40009f8280) (3) Data frame handling\nI0821 12:26:49.679697     811 log.go:172] (0x40000e7130) Data frame received for 1\nI0821 12:26:49.679848     811 log.go:172] (0x40009f81e0) (1) Data frame handling\nI0821 12:26:49.681667     811 log.go:172] (0x40009f81e0) (1) Data frame sent\nI0821 12:26:49.682107     811 log.go:172] (0x400073a000) (5) Data frame handling\nI0821 12:26:49.682296     811 log.go:172] (0x400073a000) (5) Data frame sent\nI0821 12:26:49.682451     811 log.go:172] (0x40000e7130) Data frame received for 5\nI0821 12:26:49.682592     811 log.go:172] (0x400073a000) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.24.163 80\nConnection to 10.101.24.163 80 port [tcp/http] succeeded!\nI0821 12:26:49.684026     811 log.go:172] (0x40000e7130) (0x40009f81e0) Stream removed, broadcasting: 1\nI0821 12:26:49.687498     811 log.go:172] (0x40000e7130) Go away received\nI0821 12:26:49.689900     811 log.go:172] (0x40000e7130) (0x40009f81e0) Stream removed, broadcasting: 1\nI0821 12:26:49.690719     811 log.go:172] (0x40000e7130) (0x40009f8280) Stream removed, broadcasting: 3\nI0821 12:26:49.690987     811 log.go:172] (0x40000e7130) (0x400073a000) Stream removed, broadcasting: 5\n"
Aug 21 12:26:49.703: INFO: stdout: ""
Aug 21 12:26:49.703: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:26:49.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9243" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:18.129 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":95,"skipped":1620,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:26:49.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 21 12:26:49.909: INFO: Waiting up to 5m0s for pod "pod-05477039-6628-4c32-92ec-99e66ae17906" in namespace "emptydir-3489" to be "Succeeded or Failed"
Aug 21 12:26:49.932: INFO: Pod "pod-05477039-6628-4c32-92ec-99e66ae17906": Phase="Pending", Reason="", readiness=false. Elapsed: 23.091983ms
Aug 21 12:26:51.939: INFO: Pod "pod-05477039-6628-4c32-92ec-99e66ae17906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030643545s
Aug 21 12:26:53.947: INFO: Pod "pod-05477039-6628-4c32-92ec-99e66ae17906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038190917s
STEP: Saw pod success
Aug 21 12:26:53.947: INFO: Pod "pod-05477039-6628-4c32-92ec-99e66ae17906" satisfied condition "Succeeded or Failed"
Aug 21 12:26:53.953: INFO: Trying to get logs from node kali-worker pod pod-05477039-6628-4c32-92ec-99e66ae17906 container test-container: 
STEP: delete the pod
Aug 21 12:26:54.110: INFO: Waiting for pod pod-05477039-6628-4c32-92ec-99e66ae17906 to disappear
Aug 21 12:26:54.120: INFO: Pod pod-05477039-6628-4c32-92ec-99e66ae17906 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:26:54.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3489" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1623,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:26:54.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-4d980205-e5d0-4975-aca1-fb62dd94e0d6
STEP: Creating a pod to test consume secrets
Aug 21 12:26:54.369: INFO: Waiting up to 5m0s for pod "pod-secrets-111f3c15-90a8-4b7d-aea0-b32a33a252eb" in namespace "secrets-4120" to be "Succeeded or Failed"
Aug 21 12:26:54.425: INFO: Pod "pod-secrets-111f3c15-90a8-4b7d-aea0-b32a33a252eb": Phase="Pending", Reason="", readiness=false. Elapsed: 56.195469ms
Aug 21 12:26:56.432: INFO: Pod "pod-secrets-111f3c15-90a8-4b7d-aea0-b32a33a252eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063177363s
Aug 21 12:26:58.438: INFO: Pod "pod-secrets-111f3c15-90a8-4b7d-aea0-b32a33a252eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068631143s
Aug 21 12:27:00.445: INFO: Pod "pod-secrets-111f3c15-90a8-4b7d-aea0-b32a33a252eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075965554s
STEP: Saw pod success
Aug 21 12:27:00.445: INFO: Pod "pod-secrets-111f3c15-90a8-4b7d-aea0-b32a33a252eb" satisfied condition "Succeeded or Failed"
Aug 21 12:27:00.451: INFO: Trying to get logs from node kali-worker pod pod-secrets-111f3c15-90a8-4b7d-aea0-b32a33a252eb container secret-volume-test: 
STEP: delete the pod
Aug 21 12:27:00.477: INFO: Waiting for pod pod-secrets-111f3c15-90a8-4b7d-aea0-b32a33a252eb to disappear
Aug 21 12:27:00.497: INFO: Pod pod-secrets-111f3c15-90a8-4b7d-aea0-b32a33a252eb no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:27:00.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4120" for this suite.
STEP: Destroying namespace "secret-namespace-1730" for this suite.

• [SLOW TEST:6.380 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1631,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:27:00.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:27:00.599: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 21 12:27:00.625: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 21 12:27:05.644: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 21 12:27:05.645: INFO: Creating deployment "test-rolling-update-deployment"
Aug 21 12:27:05.673: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 21 12:27:05.703: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 21 12:27:07.719: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 21 12:27:07.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609625, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609625, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609625, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609625, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:27:09.743: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 21 12:27:09.758: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-1325 /apis/apps/v1/namespaces/deployment-1325/deployments/test-rolling-update-deployment 33937980-2ca0-4e58-b31b-23cc27c58689 2115812 1 2020-08-21 12:27:05 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-08-21 12:27:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 12:27:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003443328  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-21 12:27:05 +0000 UTC,LastTransitionTime:2020-08-21 12:27:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-08-21 12:27:08 +0000 UTC,LastTransitionTime:2020-08-21 12:27:05 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 21 12:27:09.767: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-1325 /apis/apps/v1/namespaces/deployment-1325/replicasets/test-rolling-update-deployment-59d5cb45c7 5757be90-61d8-4dcd-af48-884b62f3376b 2115801 1 2020-08-21 12:27:05 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 33937980-2ca0-4e58-b31b-23cc27c58689 0x4003443887 0x4003443888}] []  [{kube-controller-manager Update apps/v1 2020-08-21 12:27:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 51 57 51 55 57 56 48 45 50 99 97 48 45 52 101 53 56 45 98 51 49 98 45 50 51 99 99 50 55 99 53 56 54 56 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 
115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003443918  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 21 12:27:09.768: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 21 12:27:09.769: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-1325 /apis/apps/v1/namespaces/deployment-1325/replicasets/test-rolling-update-controller a71e70ce-379b-4771-a7f1-51a6452bf785 2115811 2 2020-08-21 12:27:00 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 33937980-2ca0-4e58-b31b-23cc27c58689 0x400344376f 0x4003443780}] []  [{e2e.test Update apps/v1 2020-08-21 12:27:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 12:27:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 51 57 51 55 57 56 48 45 50 99 97 48 45 52 101 53 56 45 98 51 49 98 45 50 51 99 99 50 55 99 53 56 54 56 57 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4003443818  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 12:27:09.777: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-ghrbc" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-ghrbc test-rolling-update-deployment-59d5cb45c7- deployment-1325 /api/v1/namespaces/deployment-1325/pods/test-rolling-update-deployment-59d5cb45c7-ghrbc d1427f6c-d6b5-48ce-9473-13ae93063e8a 2115800 0 2020-08-21 12:27:05 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 5757be90-61d8-4dcd-af48-884b62f3376b 0x40035127f7 0x40035127f8}] []  [{kube-controller-manager Update v1 2020-08-21 12:27:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 55 53 55 98 101 57 48 45 54 49 100 56 45 52 100 99 100 45 97 102 52 56 45 56 56 52 98 54 50 102 51 51 55 54 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 12:27:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 
100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 55 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mkhch,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mkhch,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mkhch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreac
hable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:27:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:27:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:27:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 12:27:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.70,StartTime:2020-08-21 12:27:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 12:27:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://43662d7d2ac50706ce2fc599baa65f71741b1ad52c3810b18745fbcdd95c6a4a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:27:09.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1325" for this suite.

• [SLOW TEST:9.275 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":98,"skipped":1631,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:27:09.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:27:09.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8483" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":99,"skipped":1649,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:27:09.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 21 12:27:10.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5474'
Aug 21 12:27:11.731: INFO: stderr: ""
Aug 21 12:27:11.731: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 12:27:11.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5474'
Aug 21 12:27:13.006: INFO: stderr: ""
Aug 21 12:27:13.006: INFO: stdout: "update-demo-nautilus-2dj8q update-demo-nautilus-7t2k6 "
Aug 21 12:27:13.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2dj8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5474'
Aug 21 12:27:14.237: INFO: stderr: ""
Aug 21 12:27:14.237: INFO: stdout: ""
Aug 21 12:27:14.238: INFO: update-demo-nautilus-2dj8q is created but not running
Aug 21 12:27:19.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5474'
Aug 21 12:27:20.545: INFO: stderr: ""
Aug 21 12:27:20.545: INFO: stdout: "update-demo-nautilus-2dj8q update-demo-nautilus-7t2k6 "
Aug 21 12:27:20.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2dj8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5474'
Aug 21 12:27:21.827: INFO: stderr: ""
Aug 21 12:27:21.827: INFO: stdout: "true"
Aug 21 12:27:21.828: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2dj8q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5474'
Aug 21 12:27:23.075: INFO: stderr: ""
Aug 21 12:27:23.075: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 12:27:23.076: INFO: validating pod update-demo-nautilus-2dj8q
Aug 21 12:27:23.084: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 12:27:23.084: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 12:27:23.085: INFO: update-demo-nautilus-2dj8q is verified up and running
Aug 21 12:27:23.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7t2k6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5474'
Aug 21 12:27:24.353: INFO: stderr: ""
Aug 21 12:27:24.354: INFO: stdout: "true"
Aug 21 12:27:24.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7t2k6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5474'
Aug 21 12:27:25.577: INFO: stderr: ""
Aug 21 12:27:25.577: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 12:27:25.577: INFO: validating pod update-demo-nautilus-7t2k6
Aug 21 12:27:25.582: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 12:27:25.583: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 12:27:25.583: INFO: update-demo-nautilus-7t2k6 is verified up and running
STEP: using delete to clean up resources
Aug 21 12:27:25.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5474'
Aug 21 12:27:26.885: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 12:27:26.885: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 21 12:27:26.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5474'
Aug 21 12:27:28.464: INFO: stderr: "No resources found in kubectl-5474 namespace.\n"
Aug 21 12:27:28.464: INFO: stdout: ""
Aug 21 12:27:28.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5474 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 12:27:29.766: INFO: stderr: ""
Aug 21 12:27:29.766: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:27:29.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5474" for this suite.

• [SLOW TEST:19.832 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":100,"skipped":1661,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:27:29.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:27:34.818: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:27:36.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609654, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609654, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609654, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733609654, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:27:39.880: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 21 12:27:43.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config attach --namespace=webhook-6262 to-be-attached-pod -i -c=container1'
Aug 21 12:27:45.303: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:27:45.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6262" for this suite.
STEP: Destroying namespace "webhook-6262-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.884 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":101,"skipped":1739,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:27:45.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-h9bc
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 12:27:45.842: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-h9bc" in namespace "subpath-9971" to be "Succeeded or Failed"
Aug 21 12:27:45.923: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Pending", Reason="", readiness=false. Elapsed: 81.123013ms
Aug 21 12:27:47.930: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08729402s
Aug 21 12:27:49.937: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094625248s
Aug 21 12:27:51.951: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Running", Reason="", readiness=true. Elapsed: 6.108612523s
Aug 21 12:27:53.958: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Running", Reason="", readiness=true. Elapsed: 8.115384524s
Aug 21 12:27:55.964: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Running", Reason="", readiness=true. Elapsed: 10.121755919s
Aug 21 12:27:57.972: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Running", Reason="", readiness=true. Elapsed: 12.129197387s
Aug 21 12:27:59.979: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Running", Reason="", readiness=true. Elapsed: 14.137096684s
Aug 21 12:28:01.988: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Running", Reason="", readiness=true. Elapsed: 16.14527995s
Aug 21 12:28:03.995: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Running", Reason="", readiness=true. Elapsed: 18.152754254s
Aug 21 12:28:06.002: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Running", Reason="", readiness=true. Elapsed: 20.159811105s
Aug 21 12:28:08.009: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Running", Reason="", readiness=true. Elapsed: 22.166322599s
Aug 21 12:28:10.030: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Running", Reason="", readiness=true. Elapsed: 24.187639021s
Aug 21 12:28:12.035: INFO: Pod "pod-subpath-test-projected-h9bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.193106106s
STEP: Saw pod success
Aug 21 12:28:12.036: INFO: Pod "pod-subpath-test-projected-h9bc" satisfied condition "Succeeded or Failed"
Aug 21 12:28:12.040: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-h9bc container test-container-subpath-projected-h9bc: 
STEP: delete the pod
Aug 21 12:28:12.121: INFO: Waiting for pod pod-subpath-test-projected-h9bc to disappear
Aug 21 12:28:12.124: INFO: Pod pod-subpath-test-projected-h9bc no longer exists
STEP: Deleting pod pod-subpath-test-projected-h9bc
Aug 21 12:28:12.124: INFO: Deleting pod "pod-subpath-test-projected-h9bc" in namespace "subpath-9971"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:28:12.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9971" for this suite.

• [SLOW TEST:26.470 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":102,"skipped":1741,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:28:12.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:28:12.315: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 12:28:12.350: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:12.369: INFO: Number of nodes with available pods: 0
Aug 21 12:28:12.369: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:28:13.377: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:13.381: INFO: Number of nodes with available pods: 0
Aug 21 12:28:13.381: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:28:14.514: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:14.687: INFO: Number of nodes with available pods: 0
Aug 21 12:28:14.687: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:28:15.427: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:15.432: INFO: Number of nodes with available pods: 0
Aug 21 12:28:15.432: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:28:16.379: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:16.406: INFO: Number of nodes with available pods: 1
Aug 21 12:28:16.406: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:28:17.392: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:17.400: INFO: Number of nodes with available pods: 2
Aug 21 12:28:17.401: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 21 12:28:17.457: INFO: Wrong image for pod: daemon-set-wndrx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:17.457: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:17.472: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:18.479: INFO: Wrong image for pod: daemon-set-wndrx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:18.479: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:18.488: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:19.479: INFO: Wrong image for pod: daemon-set-wndrx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:19.480: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:19.489: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:20.499: INFO: Wrong image for pod: daemon-set-wndrx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:20.499: INFO: Pod daemon-set-wndrx is not available
Aug 21 12:28:20.499: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:20.504: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:21.542: INFO: Pod daemon-set-7bxjr is not available
Aug 21 12:28:21.542: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:21.560: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:22.480: INFO: Pod daemon-set-7bxjr is not available
Aug 21 12:28:22.480: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:22.491: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:23.477: INFO: Pod daemon-set-7bxjr is not available
Aug 21 12:28:23.477: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:23.485: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:24.504: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:24.510: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:25.480: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:25.489: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:26.479: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:26.479: INFO: Pod daemon-set-zbzvh is not available
Aug 21 12:28:26.489: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:27.479: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:27.479: INFO: Pod daemon-set-zbzvh is not available
Aug 21 12:28:27.489: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:28.480: INFO: Wrong image for pod: daemon-set-zbzvh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 21 12:28:28.480: INFO: Pod daemon-set-zbzvh is not available
Aug 21 12:28:28.490: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:29.477: INFO: Pod daemon-set-62v8k is not available
Aug 21 12:28:29.484: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 21 12:28:29.491: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:29.496: INFO: Number of nodes with available pods: 1
Aug 21 12:28:29.496: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:28:30.506: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:30.511: INFO: Number of nodes with available pods: 1
Aug 21 12:28:30.511: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:28:31.630: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:31.666: INFO: Number of nodes with available pods: 1
Aug 21 12:28:31.666: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:28:32.508: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:32.515: INFO: Number of nodes with available pods: 1
Aug 21 12:28:32.516: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:28:33.506: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:28:33.528: INFO: Number of nodes with available pods: 2
Aug 21 12:28:33.528: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3797, will wait for the garbage collector to delete the pods
Aug 21 12:28:33.705: INFO: Deleting DaemonSet.extensions daemon-set took: 7.887252ms
Aug 21 12:28:35.906: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.201011743s
Aug 21 12:28:49.211: INFO: Number of nodes with available pods: 0
Aug 21 12:28:49.212: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 12:28:49.216: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3797/daemonsets","resourceVersion":"2116378"},"items":null}

Aug 21 12:28:49.220: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3797/pods","resourceVersion":"2116378"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:28:49.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3797" for this suite.

• [SLOW TEST:37.125 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":103,"skipped":1762,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:28:49.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:29:05.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9291" for this suite.

• [SLOW TEST:16.455 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":104,"skipped":1763,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:29:05.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-8175/secret-test-acfcf256-d1db-464e-ae02-6ba1d6d1fe67
STEP: Creating a pod to test consume secrets
Aug 21 12:29:05.876: INFO: Waiting up to 5m0s for pod "pod-configmaps-93025927-fc27-49d6-805d-8ab07a748aa3" in namespace "secrets-8175" to be "Succeeded or Failed"
Aug 21 12:29:05.950: INFO: Pod "pod-configmaps-93025927-fc27-49d6-805d-8ab07a748aa3": Phase="Pending", Reason="", readiness=false. Elapsed: 73.562815ms
Aug 21 12:29:07.956: INFO: Pod "pod-configmaps-93025927-fc27-49d6-805d-8ab07a748aa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07943609s
Aug 21 12:29:09.960: INFO: Pod "pod-configmaps-93025927-fc27-49d6-805d-8ab07a748aa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083976226s
STEP: Saw pod success
Aug 21 12:29:09.960: INFO: Pod "pod-configmaps-93025927-fc27-49d6-805d-8ab07a748aa3" satisfied condition "Succeeded or Failed"
Aug 21 12:29:09.964: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-93025927-fc27-49d6-805d-8ab07a748aa3 container env-test: 
STEP: delete the pod
Aug 21 12:29:10.063: INFO: Waiting for pod pod-configmaps-93025927-fc27-49d6-805d-8ab07a748aa3 to disappear
Aug 21 12:29:10.071: INFO: Pod pod-configmaps-93025927-fc27-49d6-805d-8ab07a748aa3 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:29:10.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8175" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1771,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:29:10.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 21 12:29:10.223: INFO: Waiting up to 5m0s for pod "pod-64b5694d-ee55-4f2f-91f0-711387ba0149" in namespace "emptydir-5159" to be "Succeeded or Failed"
Aug 21 12:29:10.227: INFO: Pod "pod-64b5694d-ee55-4f2f-91f0-711387ba0149": Phase="Pending", Reason="", readiness=false. Elapsed: 4.653274ms
Aug 21 12:29:12.233: INFO: Pod "pod-64b5694d-ee55-4f2f-91f0-711387ba0149": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010069108s
Aug 21 12:29:14.239: INFO: Pod "pod-64b5694d-ee55-4f2f-91f0-711387ba0149": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016014895s
STEP: Saw pod success
Aug 21 12:29:14.239: INFO: Pod "pod-64b5694d-ee55-4f2f-91f0-711387ba0149" satisfied condition "Succeeded or Failed"
Aug 21 12:29:14.243: INFO: Trying to get logs from node kali-worker pod pod-64b5694d-ee55-4f2f-91f0-711387ba0149 container test-container: 
STEP: delete the pod
Aug 21 12:29:14.351: INFO: Waiting for pod pod-64b5694d-ee55-4f2f-91f0-711387ba0149 to disappear
Aug 21 12:29:14.413: INFO: Pod pod-64b5694d-ee55-4f2f-91f0-711387ba0149 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:29:14.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5159" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1830,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:29:14.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:29:14.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9278" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":107,"skipped":1831,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:29:14.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 21 12:29:14.648: INFO: Waiting up to 5m0s for pod "pod-c5803aba-2032-4cea-a6d3-b057d043cbc4" in namespace "emptydir-3795" to be "Succeeded or Failed"
Aug 21 12:29:14.664: INFO: Pod "pod-c5803aba-2032-4cea-a6d3-b057d043cbc4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.327288ms
Aug 21 12:29:16.671: INFO: Pod "pod-c5803aba-2032-4cea-a6d3-b057d043cbc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022878369s
Aug 21 12:29:18.676: INFO: Pod "pod-c5803aba-2032-4cea-a6d3-b057d043cbc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028007008s
STEP: Saw pod success
Aug 21 12:29:18.676: INFO: Pod "pod-c5803aba-2032-4cea-a6d3-b057d043cbc4" satisfied condition "Succeeded or Failed"
Aug 21 12:29:18.680: INFO: Trying to get logs from node kali-worker pod pod-c5803aba-2032-4cea-a6d3-b057d043cbc4 container test-container: 
STEP: delete the pod
Aug 21 12:29:18.715: INFO: Waiting for pod pod-c5803aba-2032-4cea-a6d3-b057d043cbc4 to disappear
Aug 21 12:29:18.738: INFO: Pod pod-c5803aba-2032-4cea-a6d3-b057d043cbc4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:29:18.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3795" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1836,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:29:18.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-90ab4c81-f414-4d15-b69f-a3f5dbc87301
STEP: Creating a pod to test consume secrets
Aug 21 12:29:18.813: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dde937ec-8a3c-4ac8-a136-8508bb5eaf38" in namespace "projected-4457" to be "Succeeded or Failed"
Aug 21 12:29:18.834: INFO: Pod "pod-projected-secrets-dde937ec-8a3c-4ac8-a136-8508bb5eaf38": Phase="Pending", Reason="", readiness=false. Elapsed: 19.943847ms
Aug 21 12:29:20.840: INFO: Pod "pod-projected-secrets-dde937ec-8a3c-4ac8-a136-8508bb5eaf38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02622757s
Aug 21 12:29:22.913: INFO: Pod "pod-projected-secrets-dde937ec-8a3c-4ac8-a136-8508bb5eaf38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099799405s
STEP: Saw pod success
Aug 21 12:29:22.914: INFO: Pod "pod-projected-secrets-dde937ec-8a3c-4ac8-a136-8508bb5eaf38" satisfied condition "Succeeded or Failed"
Aug 21 12:29:22.935: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-dde937ec-8a3c-4ac8-a136-8508bb5eaf38 container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 12:29:23.011: INFO: Waiting for pod pod-projected-secrets-dde937ec-8a3c-4ac8-a136-8508bb5eaf38 to disappear
Aug 21 12:29:23.116: INFO: Pod pod-projected-secrets-dde937ec-8a3c-4ac8-a136-8508bb5eaf38 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:29:23.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4457" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1863,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:29:23.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-2116750a-ce99-4b7b-aeb2-b6b45e93892b
STEP: Creating a pod to test consume configMaps
Aug 21 12:29:23.258: INFO: Waiting up to 5m0s for pod "pod-configmaps-a6a9d618-9bce-4602-8a39-3015f30ec908" in namespace "configmap-4490" to be "Succeeded or Failed"
Aug 21 12:29:23.271: INFO: Pod "pod-configmaps-a6a9d618-9bce-4602-8a39-3015f30ec908": Phase="Pending", Reason="", readiness=false. Elapsed: 12.865277ms
Aug 21 12:29:25.275: INFO: Pod "pod-configmaps-a6a9d618-9bce-4602-8a39-3015f30ec908": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01767568s
Aug 21 12:29:27.283: INFO: Pod "pod-configmaps-a6a9d618-9bce-4602-8a39-3015f30ec908": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02471359s
STEP: Saw pod success
Aug 21 12:29:27.283: INFO: Pod "pod-configmaps-a6a9d618-9bce-4602-8a39-3015f30ec908" satisfied condition "Succeeded or Failed"
Aug 21 12:29:27.287: INFO: Trying to get logs from node kali-worker pod pod-configmaps-a6a9d618-9bce-4602-8a39-3015f30ec908 container configmap-volume-test: 
STEP: delete the pod
Aug 21 12:29:27.540: INFO: Waiting for pod pod-configmaps-a6a9d618-9bce-4602-8a39-3015f30ec908 to disappear
Aug 21 12:29:27.565: INFO: Pod pod-configmaps-a6a9d618-9bce-4602-8a39-3015f30ec908 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:29:27.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4490" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1869,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:29:27.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:29:27.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 21 12:29:47.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1774 create -f -'
Aug 21 12:29:55.290: INFO: stderr: ""
Aug 21 12:29:55.290: INFO: stdout: "e2e-test-crd-publish-openapi-8928-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 21 12:29:55.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1774 delete e2e-test-crd-publish-openapi-8928-crds test-foo'
Aug 21 12:29:56.548: INFO: stderr: ""
Aug 21 12:29:56.548: INFO: stdout: "e2e-test-crd-publish-openapi-8928-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 21 12:29:56.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1774 apply -f -'
Aug 21 12:29:58.147: INFO: stderr: ""
Aug 21 12:29:58.147: INFO: stdout: "e2e-test-crd-publish-openapi-8928-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 21 12:29:58.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1774 delete e2e-test-crd-publish-openapi-8928-crds test-foo'
Aug 21 12:29:59.411: INFO: stderr: ""
Aug 21 12:29:59.411: INFO: stdout: "e2e-test-crd-publish-openapi-8928-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 21 12:29:59.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1774 create -f -'
Aug 21 12:30:00.901: INFO: rc: 1
Aug 21 12:30:00.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1774 apply -f -'
Aug 21 12:30:02.407: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 21 12:30:02.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1774 create -f -'
Aug 21 12:30:03.957: INFO: rc: 1
Aug 21 12:30:03.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1774 apply -f -'
Aug 21 12:30:05.607: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 21 12:30:05.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8928-crds'
Aug 21 12:30:07.149: INFO: stderr: ""
Aug 21 12:30:07.149: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8928-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 21 12:30:07.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8928-crds.metadata'
Aug 21 12:30:08.674: INFO: stderr: ""
Aug 21 12:30:08.675: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8928-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 21 12:30:08.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8928-crds.spec'
Aug 21 12:30:10.206: INFO: stderr: ""
Aug 21 12:30:10.207: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8928-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 21 12:30:10.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8928-crds.spec.bars'
Aug 21 12:30:11.730: INFO: stderr: ""
Aug 21 12:30:11.730: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8928-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 21 12:30:11.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8928-crds.spec.bars2'
Aug 21 12:30:13.284: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:30:32.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1774" for this suite.

• [SLOW TEST:65.218 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":111,"skipped":1887,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:30:32.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-437796cb-d9a4-4402-a2c7-a4c458aedd05
STEP: Creating a pod to test consume configMaps
Aug 21 12:30:32.920: INFO: Waiting up to 5m0s for pod "pod-configmaps-27a264c6-435f-4a93-8da7-b4b9798383a7" in namespace "configmap-1228" to be "Succeeded or Failed"
Aug 21 12:30:32.939: INFO: Pod "pod-configmaps-27a264c6-435f-4a93-8da7-b4b9798383a7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.552372ms
Aug 21 12:30:34.949: INFO: Pod "pod-configmaps-27a264c6-435f-4a93-8da7-b4b9798383a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028734455s
Aug 21 12:30:37.082: INFO: Pod "pod-configmaps-27a264c6-435f-4a93-8da7-b4b9798383a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16098025s
STEP: Saw pod success
Aug 21 12:30:37.082: INFO: Pod "pod-configmaps-27a264c6-435f-4a93-8da7-b4b9798383a7" satisfied condition "Succeeded or Failed"
Aug 21 12:30:37.088: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-27a264c6-435f-4a93-8da7-b4b9798383a7 container configmap-volume-test: 
STEP: delete the pod
Aug 21 12:30:37.112: INFO: Waiting for pod pod-configmaps-27a264c6-435f-4a93-8da7-b4b9798383a7 to disappear
Aug 21 12:30:37.183: INFO: Pod pod-configmaps-27a264c6-435f-4a93-8da7-b4b9798383a7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:30:37.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1228" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1906,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:30:37.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6694
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6694
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-6694
Aug 21 12:30:37.319: INFO: Found 0 stateful pods, waiting for 1
Aug 21 12:30:47.327: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 21 12:30:47.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 12:30:48.879: INFO: stderr: "I0821 12:30:48.727572    1426 log.go:172] (0x4000aba000) (0x4000a74000) Create stream\nI0821 12:30:48.732165    1426 log.go:172] (0x4000aba000) (0x4000a74000) Stream added, broadcasting: 1\nI0821 12:30:48.742954    1426 log.go:172] (0x4000aba000) Reply frame received for 1\nI0821 12:30:48.743546    1426 log.go:172] (0x4000aba000) (0x40008bb360) Create stream\nI0821 12:30:48.743610    1426 log.go:172] (0x4000aba000) (0x40008bb360) Stream added, broadcasting: 3\nI0821 12:30:48.745191    1426 log.go:172] (0x4000aba000) Reply frame received for 3\nI0821 12:30:48.745560    1426 log.go:172] (0x4000aba000) (0x40008bb540) Create stream\nI0821 12:30:48.745642    1426 log.go:172] (0x4000aba000) (0x40008bb540) Stream added, broadcasting: 5\nI0821 12:30:48.747100    1426 log.go:172] (0x4000aba000) Reply frame received for 5\nI0821 12:30:48.832707    1426 log.go:172] (0x4000aba000) Data frame received for 5\nI0821 12:30:48.833048    1426 log.go:172] (0x40008bb540) (5) Data frame handling\nI0821 12:30:48.833596    1426 log.go:172] (0x40008bb540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 12:30:48.855483    1426 log.go:172] (0x4000aba000) Data frame received for 3\nI0821 12:30:48.855631    1426 log.go:172] (0x40008bb360) (3) Data frame handling\nI0821 12:30:48.855798    1426 log.go:172] (0x40008bb360) (3) Data frame sent\nI0821 12:30:48.856013    1426 log.go:172] (0x4000aba000) Data frame received for 3\nI0821 12:30:48.856175    1426 log.go:172] (0x40008bb360) (3) Data frame handling\nI0821 12:30:48.856484    1426 log.go:172] (0x4000aba000) Data frame received for 5\nI0821 12:30:48.856694    1426 log.go:172] (0x40008bb540) (5) Data frame handling\nI0821 12:30:48.857968    1426 log.go:172] (0x4000aba000) Data frame received for 1\nI0821 12:30:48.858037    1426 log.go:172] (0x4000a74000) (1) Data frame handling\nI0821 12:30:48.858107    1426 log.go:172] (0x4000a74000) (1) Data frame sent\nI0821 12:30:48.859395    1426 log.go:172] (0x4000aba000) (0x4000a74000) Stream removed, broadcasting: 1\nI0821 12:30:48.862923    1426 log.go:172] (0x4000aba000) Go away received\nI0821 12:30:48.867769    1426 log.go:172] (0x4000aba000) (0x4000a74000) Stream removed, broadcasting: 1\nI0821 12:30:48.868294    1426 log.go:172] (0x4000aba000) (0x40008bb360) Stream removed, broadcasting: 3\nI0821 12:30:48.868702    1426 log.go:172] (0x4000aba000) (0x40008bb540) Stream removed, broadcasting: 5\n"
Aug 21 12:30:48.880: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 12:30:48.881: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 12:30:48.893: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 12:30:48.893: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 12:30:49.024: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999994063s
Aug 21 12:30:50.031: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.92969975s
Aug 21 12:30:51.064: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.922356741s
Aug 21 12:30:52.071: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.889749898s
Aug 21 12:30:53.136: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.882352556s
Aug 21 12:30:54.145: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.81741339s
Aug 21 12:30:55.208: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.808943257s
Aug 21 12:30:56.461: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.745748556s
Aug 21 12:30:57.568: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.492990253s
Aug 21 12:30:58.576: INFO: Verifying statefulset ss doesn't scale past 1 for another 385.938484ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6694
Aug 21 12:30:59.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:31:01.050: INFO: stderr: "I0821 12:31:00.958049    1450 log.go:172] (0x4000a88420) (0x400073a1e0) Create stream\nI0821 12:31:00.963405    1450 log.go:172] (0x4000a88420) (0x400073a1e0) Stream added, broadcasting: 1\nI0821 12:31:00.974932    1450 log.go:172] (0x4000a88420) Reply frame received for 1\nI0821 12:31:00.975619    1450 log.go:172] (0x4000a88420) (0x400079a000) Create stream\nI0821 12:31:00.975685    1450 log.go:172] (0x4000a88420) (0x400079a000) Stream added, broadcasting: 3\nI0821 12:31:00.977551    1450 log.go:172] (0x4000a88420) Reply frame received for 3\nI0821 12:31:00.977905    1450 log.go:172] (0x4000a88420) (0x400079a0a0) Create stream\nI0821 12:31:00.977996    1450 log.go:172] (0x4000a88420) (0x400079a0a0) Stream added, broadcasting: 5\nI0821 12:31:00.979695    1450 log.go:172] (0x4000a88420) Reply frame received for 5\nI0821 12:31:01.029026    1450 log.go:172] (0x4000a88420) Data frame received for 5\nI0821 12:31:01.029436    1450 log.go:172] (0x400079a0a0) (5) Data frame handling\nI0821 12:31:01.030028    1450 log.go:172] (0x4000a88420) Data frame received for 3\nI0821 12:31:01.030163    1450 log.go:172] (0x400079a000) (3) Data frame handling\nI0821 12:31:01.030262    1450 log.go:172] (0x400079a0a0) (5) Data frame sent\nI0821 12:31:01.030367    1450 log.go:172] (0x400079a000) (3) Data frame sent\nI0821 12:31:01.030616    1450 log.go:172] (0x4000a88420) Data frame received for 1\nI0821 12:31:01.030747    1450 log.go:172] (0x400073a1e0) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 12:31:01.030834    1450 log.go:172] (0x4000a88420) Data frame received for 3\nI0821 12:31:01.030944    1450 log.go:172] (0x400079a000) (3) Data frame handling\nI0821 12:31:01.031018    1450 log.go:172] (0x4000a88420) Data frame received for 5\nI0821 12:31:01.031109    1450 log.go:172] (0x400079a0a0) (5) Data frame handling\nI0821 12:31:01.031178    1450 log.go:172] (0x400073a1e0) (1) Data frame sent\nI0821 12:31:01.033317    1450 log.go:172] (0x4000a88420) (0x400073a1e0) Stream removed, broadcasting: 1\nI0821 12:31:01.035799    1450 log.go:172] (0x4000a88420) Go away received\nI0821 12:31:01.039024    1450 log.go:172] (0x4000a88420) (0x400073a1e0) Stream removed, broadcasting: 1\nI0821 12:31:01.039624    1450 log.go:172] (0x4000a88420) (0x400079a000) Stream removed, broadcasting: 3\nI0821 12:31:01.039958    1450 log.go:172] (0x4000a88420) (0x400079a0a0) Stream removed, broadcasting: 5\n"
Aug 21 12:31:01.051: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 12:31:01.051: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 12:31:01.058: INFO: Found 1 stateful pods, waiting for 3
Aug 21 12:31:11.066: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 12:31:11.066: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 12:31:11.066: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 21 12:31:11.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 12:31:12.525: INFO: stderr: "I0821 12:31:12.422668    1472 log.go:172] (0x400003a2c0) (0x4000a2c000) Create stream\nI0821 12:31:12.425088    1472 log.go:172] (0x400003a2c0) (0x4000a2c000) Stream added, broadcasting: 1\nI0821 12:31:12.434485    1472 log.go:172] (0x400003a2c0) Reply frame received for 1\nI0821 12:31:12.435323    1472 log.go:172] (0x400003a2c0) (0x40007e92c0) Create stream\nI0821 12:31:12.435430    1472 log.go:172] (0x400003a2c0) (0x40007e92c0) Stream added, broadcasting: 3\nI0821 12:31:12.436974    1472 log.go:172] (0x400003a2c0) Reply frame received for 3\nI0821 12:31:12.437238    1472 log.go:172] (0x400003a2c0) (0x4000a2c0a0) Create stream\nI0821 12:31:12.437299    1472 log.go:172] (0x400003a2c0) (0x4000a2c0a0) Stream added, broadcasting: 5\nI0821 12:31:12.438451    1472 log.go:172] (0x400003a2c0) Reply frame received for 5\nI0821 12:31:12.505392    1472 log.go:172] (0x400003a2c0) Data frame received for 3\nI0821 12:31:12.506024    1472 log.go:172] (0x400003a2c0) Data frame received for 1\nI0821 12:31:12.506187    1472 log.go:172] (0x4000a2c000) (1) Data frame handling\nI0821 12:31:12.506467    1472 log.go:172] (0x400003a2c0) Data frame received for 5\nI0821 12:31:12.506553    1472 log.go:172] (0x4000a2c0a0) (5) Data frame handling\nI0821 12:31:12.506730    1472 log.go:172] (0x40007e92c0) (3) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 12:31:12.508139    1472 log.go:172] (0x4000a2c0a0) (5) Data frame sent\nI0821 12:31:12.508472    1472 log.go:172] (0x40007e92c0) (3) Data frame sent\nI0821 12:31:12.508666    1472 log.go:172] (0x4000a2c000) (1) Data frame sent\nI0821 12:31:12.509268    1472 log.go:172] (0x400003a2c0) Data frame received for 3\nI0821 12:31:12.509587    1472 log.go:172] (0x400003a2c0) Data frame received for 5\nI0821 12:31:12.510056    1472 log.go:172] (0x400003a2c0) (0x4000a2c000) Stream removed, broadcasting: 1\nI0821 12:31:12.510639    1472 log.go:172] (0x40007e92c0) (3) Data frame handling\nI0821 12:31:12.511169    1472 log.go:172] (0x4000a2c0a0) (5) Data frame handling\nI0821 12:31:12.512981    1472 log.go:172] (0x400003a2c0) Go away received\nI0821 12:31:12.515816    1472 log.go:172] (0x400003a2c0) (0x4000a2c000) Stream removed, broadcasting: 1\nI0821 12:31:12.516159    1472 log.go:172] (0x400003a2c0) (0x40007e92c0) Stream removed, broadcasting: 3\nI0821 12:31:12.516402    1472 log.go:172] (0x400003a2c0) (0x4000a2c0a0) Stream removed, broadcasting: 5\n"
Aug 21 12:31:12.526: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 12:31:12.526: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 12:31:12.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 12:31:14.036: INFO: stderr: "I0821 12:31:13.872680    1497 log.go:172] (0x40009fa000) (0x4000813540) Create stream\nI0821 12:31:13.878782    1497 log.go:172] (0x40009fa000) (0x4000813540) Stream added, broadcasting: 1\nI0821 12:31:13.891520    1497 log.go:172] (0x40009fa000) Reply frame received for 1\nI0821 12:31:13.892025    1497 log.go:172] (0x40009fa000) (0x40008135e0) Create stream\nI0821 12:31:13.892075    1497 log.go:172] (0x40009fa000) (0x40008135e0) Stream added, broadcasting: 3\nI0821 12:31:13.893320    1497 log.go:172] (0x40009fa000) Reply frame received for 3\nI0821 12:31:13.893675    1497 log.go:172] (0x40009fa000) (0x4000708140) Create stream\nI0821 12:31:13.893754    1497 log.go:172] (0x40009fa000) (0x4000708140) Stream added, broadcasting: 5\nI0821 12:31:13.895174    1497 log.go:172] (0x40009fa000) Reply frame received for 5\nI0821 12:31:13.986830    1497 log.go:172] (0x40009fa000) Data frame received for 5\nI0821 12:31:13.987035    1497 log.go:172] (0x4000708140) (5) Data frame handling\nI0821 12:31:13.987372    1497 log.go:172] (0x4000708140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 12:31:14.017098    1497 log.go:172] (0x40009fa000) Data frame received for 3\nI0821 12:31:14.017261    1497 log.go:172] (0x40009fa000) Data frame received for 5\nI0821 12:31:14.017416    1497 log.go:172] (0x4000708140) (5) Data frame handling\nI0821 12:31:14.017982    1497 log.go:172] (0x40008135e0) (3) Data frame handling\nI0821 12:31:14.018102    1497 log.go:172] (0x40008135e0) (3) Data frame sent\nI0821 12:31:14.018224    1497 log.go:172] (0x40009fa000) Data frame received for 3\nI0821 12:31:14.018311    1497 log.go:172] (0x40008135e0) (3) Data frame handling\nI0821 12:31:14.019045    1497 log.go:172] (0x40009fa000) Data frame received for 1\nI0821 12:31:14.019139    1497 log.go:172] (0x4000813540) (1) Data frame handling\nI0821 12:31:14.019225    1497 log.go:172] (0x4000813540) (1) Data frame sent\nI0821 12:31:14.020888    1497 log.go:172] (0x40009fa000) (0x4000813540) Stream removed, broadcasting: 1\nI0821 12:31:14.023560    1497 log.go:172] (0x40009fa000) Go away received\nI0821 12:31:14.026996    1497 log.go:172] (0x40009fa000) (0x4000813540) Stream removed, broadcasting: 1\nI0821 12:31:14.027351    1497 log.go:172] (0x40009fa000) (0x40008135e0) Stream removed, broadcasting: 3\nI0821 12:31:14.027567    1497 log.go:172] (0x40009fa000) (0x4000708140) Stream removed, broadcasting: 5\n"
Aug 21 12:31:14.037: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 12:31:14.037: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 12:31:14.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 12:31:15.537: INFO: stderr: "I0821 12:31:15.385782    1519 log.go:172] (0x4000a2e000) (0x4000718140) Create stream\nI0821 12:31:15.390889    1519 log.go:172] (0x4000a2e000) (0x4000718140) Stream added, broadcasting: 1\nI0821 12:31:15.404579    1519 log.go:172] (0x4000a2e000) Reply frame received for 1\nI0821 12:31:15.405689    1519 log.go:172] (0x4000a2e000) (0x40007181e0) Create stream\nI0821 12:31:15.405789    1519 log.go:172] (0x4000a2e000) (0x40007181e0) Stream added, broadcasting: 3\nI0821 12:31:15.407314    1519 log.go:172] (0x4000a2e000) Reply frame received for 3\nI0821 12:31:15.407637    1519 log.go:172] (0x4000a2e000) (0x4000742140) Create stream\nI0821 12:31:15.407734    1519 log.go:172] (0x4000a2e000) (0x4000742140) Stream added, broadcasting: 5\nI0821 12:31:15.408968    1519 log.go:172] (0x4000a2e000) Reply frame received for 5\nI0821 12:31:15.470873    1519 log.go:172] (0x4000a2e000) Data frame received for 5\nI0821 12:31:15.471412    1519 log.go:172] (0x4000742140) (5) Data frame handling\nI0821 12:31:15.472579    1519 log.go:172] (0x4000742140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 12:31:15.510467    1519 log.go:172] (0x4000a2e000) Data frame received for 3\nI0821 12:31:15.510588    1519 log.go:172] (0x40007181e0) (3) Data frame handling\nI0821 12:31:15.510680    1519 log.go:172] (0x40007181e0) (3) Data frame sent\nI0821 12:31:15.515757    1519 log.go:172] (0x4000a2e000) Data frame received for 3\nI0821 12:31:15.515975    1519 log.go:172] (0x40007181e0) (3) Data frame handling\nI0821 12:31:15.516192    1519 log.go:172] (0x4000a2e000) Data frame received for 5\nI0821 12:31:15.516342    1519 log.go:172] (0x4000742140) (5) Data frame handling\nI0821 12:31:15.517768    1519 log.go:172] (0x4000a2e000) Data frame received for 1\nI0821 12:31:15.517851    1519 log.go:172] (0x4000718140) (1) Data frame handling\nI0821 12:31:15.517924    1519 log.go:172] (0x4000718140) (1) Data frame sent\nI0821 12:31:15.518726    1519 log.go:172] (0x4000a2e000) (0x4000718140) Stream removed, broadcasting: 1\nI0821 12:31:15.521656    1519 log.go:172] (0x4000a2e000) Go away received\nI0821 12:31:15.525134    1519 log.go:172] (0x4000a2e000) (0x4000718140) Stream removed, broadcasting: 1\nI0821 12:31:15.525421    1519 log.go:172] (0x4000a2e000) (0x40007181e0) Stream removed, broadcasting: 3\nI0821 12:31:15.525657    1519 log.go:172] (0x4000a2e000) (0x4000742140) Stream removed, broadcasting: 5\n"
Aug 21 12:31:15.538: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 12:31:15.538: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 12:31:15.538: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 12:31:15.543: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 21 12:31:25.554: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 12:31:25.554: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 12:31:25.554: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 12:31:25.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999592s
Aug 21 12:31:26.583: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989015775s
Aug 21 12:31:27.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98011512s
Aug 21 12:31:28.601: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.971979614s
Aug 21 12:31:29.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.962545774s
Aug 21 12:31:30.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.954534735s
Aug 21 12:31:31.767: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.910475219s
Aug 21 12:31:33.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.796487074s
Aug 21 12:31:34.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.122926168s
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6694
Aug 21 12:31:35.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:31:37.182: INFO: stderr: "I0821 12:31:37.060829    1542 log.go:172] (0x40006ca000) (0x40007dd540) Create stream\nI0821 12:31:37.065690    1542 log.go:172] (0x40006ca000) (0x40007dd540) Stream added, broadcasting: 1\nI0821 12:31:37.075290    1542 log.go:172] (0x40006ca000) Reply frame received for 1\nI0821 12:31:37.076097    1542 log.go:172] (0x40006ca000) (0x4000742000) Create stream\nI0821 12:31:37.076180    1542 log.go:172] (0x40006ca000) (0x4000742000) Stream added, broadcasting: 3\nI0821 12:31:37.077557    1542 log.go:172] (0x40006ca000) Reply frame received for 3\nI0821 12:31:37.077785    1542 log.go:172] (0x40006ca000) (0x4000752000) Create stream\nI0821 12:31:37.077843    1542 log.go:172] (0x40006ca000) (0x4000752000) Stream added, broadcasting: 5\nI0821 12:31:37.079337    1542 log.go:172] (0x40006ca000) Reply frame received for 5\nI0821 12:31:37.166532    1542 log.go:172] (0x40006ca000) Data frame received for 3\nI0821 12:31:37.166811    1542 log.go:172] (0x4000742000) (3) Data frame handling\nI0821 12:31:37.166963    1542 log.go:172] (0x40006ca000) Data frame received for 5\nI0821 12:31:37.167059    1542 log.go:172] (0x40006ca000) Data frame received for 1\nI0821 12:31:37.167144    1542 log.go:172] (0x40007dd540) (1) Data frame handling\nI0821 12:31:37.167211    1542 log.go:172] (0x4000752000) (5) Data frame handling\nI0821 12:31:37.167270    1542 log.go:172] (0x4000742000) (3) Data frame sent\nI0821 12:31:37.167340    1542 log.go:172] (0x40007dd540) (1) Data frame sent\nI0821 12:31:37.167547    1542 log.go:172] (0x4000752000) (5) Data frame sent\nI0821 12:31:37.167794    1542 log.go:172] (0x40006ca000) Data frame received for 3\nI0821 12:31:37.167854    1542 log.go:172] (0x4000742000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 12:31:37.168508    1542 log.go:172] (0x40006ca000) Data frame received for 5\nI0821 12:31:37.168610    1542 log.go:172] (0x4000752000) (5) Data frame handling\nI0821 12:31:37.170362    1542 log.go:172] (0x40006ca000) (0x40007dd540) Stream removed, broadcasting: 1\nI0821 12:31:37.172483    1542 log.go:172] (0x40006ca000) Go away received\nI0821 12:31:37.173751    1542 log.go:172] (0x40006ca000) (0x40007dd540) Stream removed, broadcasting: 1\nI0821 12:31:37.174184    1542 log.go:172] (0x40006ca000) (0x4000742000) Stream removed, broadcasting: 3\nI0821 12:31:37.174559    1542 log.go:172] (0x40006ca000) (0x4000752000) Stream removed, broadcasting: 5\n"
Aug 21 12:31:37.183: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 12:31:37.183: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 12:31:37.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:31:39.122: INFO: stderr: "I0821 12:31:39.016946    1564 log.go:172] (0x4000a38c60) (0x4000a82000) Create stream\nI0821 12:31:39.023799    1564 log.go:172] (0x4000a38c60) (0x4000a82000) Stream added, broadcasting: 1\nI0821 12:31:39.039364    1564 log.go:172] (0x4000a38c60) Reply frame received for 1\nI0821 12:31:39.039971    1564 log.go:172] (0x4000a38c60) (0x40007ff360) Create stream\nI0821 12:31:39.040038    1564 log.go:172] (0x4000a38c60) (0x40007ff360) Stream added, broadcasting: 3\nI0821 12:31:39.042262    1564 log.go:172] (0x4000a38c60) Reply frame received for 3\nI0821 12:31:39.042683    1564 log.go:172] (0x4000a38c60) (0x4000a820a0) Create stream\nI0821 12:31:39.042766    1564 log.go:172] (0x4000a38c60) (0x4000a820a0) Stream added, broadcasting: 5\nI0821 12:31:39.044168    1564 log.go:172] (0x4000a38c60) Reply frame received for 5\nI0821 12:31:39.107068    1564 log.go:172] (0x4000a38c60) Data frame received for 5\nI0821 12:31:39.107271    1564 log.go:172] (0x4000a820a0) (5) Data frame handling\nI0821 12:31:39.107433    1564 log.go:172] (0x4000a38c60) Data frame received for 3\nI0821 12:31:39.107509    1564 log.go:172] (0x40007ff360) (3) Data frame handling\nI0821 12:31:39.107711    1564 log.go:172] (0x4000a820a0) (5) Data frame sent\nI0821 12:31:39.108483    1564 log.go:172] (0x4000a38c60) Data frame received for 1\nI0821 12:31:39.108587    1564 log.go:172] (0x4000a82000) (1) Data frame handling\nI0821 12:31:39.108916    1564 log.go:172] (0x4000a82000) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 12:31:39.109011    1564 log.go:172] (0x40007ff360) (3) Data frame sent\nI0821 12:31:39.109085    1564 log.go:172] (0x4000a38c60) Data frame received for 3\nI0821 12:31:39.109214    1564 log.go:172] (0x4000a38c60) Data frame received for 5\nI0821 12:31:39.109366    1564 log.go:172] (0x4000a820a0) (5) Data frame handling\nI0821 12:31:39.109482    1564 log.go:172] (0x40007ff360) (3) Data frame handling\nI0821 12:31:39.111704    1564 log.go:172] (0x4000a38c60) (0x4000a82000) Stream removed, broadcasting: 1\nI0821 12:31:39.113678    1564 log.go:172] (0x4000a38c60) Go away received\nI0821 12:31:39.116820    1564 log.go:172] (0x4000a38c60) (0x4000a82000) Stream removed, broadcasting: 1\nI0821 12:31:39.117209    1564 log.go:172] (0x4000a38c60) (0x40007ff360) Stream removed, broadcasting: 3\nI0821 12:31:39.117365    1564 log.go:172] (0x4000a38c60) (0x4000a820a0) Stream removed, broadcasting: 5\n"
Aug 21 12:31:39.123: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 12:31:39.124: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 12:31:39.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:31:40.801: INFO: rc: 1
Aug 21 12:31:40.802: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
I0821 12:31:40.728846    1587 log.go:172] (0x4000a16d10) (0x400097c280) Create stream
I0821 12:31:40.732061    1587 log.go:172] (0x4000a16d10) (0x400097c280) Stream added, broadcasting: 1
I0821 12:31:40.741993    1587 log.go:172] (0x4000a16d10) Reply frame received for 1
I0821 12:31:40.742527    1587 log.go:172] (0x4000a16d10) (0x40007d74a0) Create stream
I0821 12:31:40.742584    1587 log.go:172] (0x4000a16d10) (0x40007d74a0) Stream added, broadcasting: 3
I0821 12:31:40.743757    1587 log.go:172] (0x4000a16d10) Reply frame received for 3
I0821 12:31:40.744023    1587 log.go:172] (0x4000a16d10) (0x400080f0e0) Create stream
I0821 12:31:40.744087    1587 log.go:172] (0x4000a16d10) (0x400080f0e0) Stream added, broadcasting: 5
I0821 12:31:40.745300    1587 log.go:172] (0x4000a16d10) Reply frame received for 5
I0821 12:31:40.776334    1587 log.go:172] (0x4000a16d10) Data frame received for 1
I0821 12:31:40.776621    1587 log.go:172] (0x400097c280) (1) Data frame handling
I0821 12:31:40.777874    1587 log.go:172] (0x400097c280) (1) Data frame sent
I0821 12:31:40.779082    1587 log.go:172] (0x4000a16d10) (0x400097c280) Stream removed, broadcasting: 1
I0821 12:31:40.783182    1587 log.go:172] (0x4000a16d10) (0x40007d74a0) Stream removed, broadcasting: 3
I0821 12:31:40.784961    1587 log.go:172] (0x4000a16d10) (0x400080f0e0) Stream removed, broadcasting: 5
I0821 12:31:40.785307    1587 log.go:172] (0x4000a16d10) Go away received
I0821 12:31:40.788657    1587 log.go:172] (0x4000a16d10) (0x400097c280) Stream removed, broadcasting: 1
I0821 12:31:40.789286    1587 log.go:172] (0x4000a16d10) (0x40007d74a0) Stream removed, broadcasting: 3
I0821 12:31:40.789393    1587 log.go:172] (0x4000a16d10) (0x400080f0e0) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "a7729ce41b7bd717fb669334b7a1d7a1c6bca0ba48877d2f69bbc6640f3d485d": task 5e4be2f6856fa845ea548fb21a3ff7bc50fc07737a310edb2ae2c1df0515f8d7 not found: not found

error:
exit status 1
Aug 21 12:31:50.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:31:52.065: INFO: rc: 1
Aug 21 12:31:52.065: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:32:02.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:32:03.298: INFO: rc: 1
Aug 21 12:32:03.298: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:32:13.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:32:14.571: INFO: rc: 1
Aug 21 12:32:14.572: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:32:24.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:32:25.819: INFO: rc: 1
Aug 21 12:32:25.819: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:32:35.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:32:37.057: INFO: rc: 1
Aug 21 12:32:37.058: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:32:47.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:32:48.313: INFO: rc: 1
Aug 21 12:32:48.314: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:32:58.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:32:59.557: INFO: rc: 1
Aug 21 12:32:59.557: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:33:09.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:33:10.778: INFO: rc: 1
Aug 21 12:33:10.779: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:33:20.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:33:22.024: INFO: rc: 1
Aug 21 12:33:22.025: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:33:32.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:33:33.257: INFO: rc: 1
Aug 21 12:33:33.258: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:33:43.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:33:44.516: INFO: rc: 1
Aug 21 12:33:44.517: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:33:54.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:33:55.733: INFO: rc: 1
Aug 21 12:33:55.734: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:34:05.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:34:07.318: INFO: rc: 1
Aug 21 12:34:07.318: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:34:17.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:34:18.990: INFO: rc: 1
Aug 21 12:34:18.990: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:34:28.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:34:30.246: INFO: rc: 1
Aug 21 12:34:30.247: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:34:40.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:34:41.509: INFO: rc: 1
Aug 21 12:34:41.509: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:34:51.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:34:52.770: INFO: rc: 1
Aug 21 12:34:52.770: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:35:02.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:35:04.036: INFO: rc: 1
Aug 21 12:35:04.036: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:35:14.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:35:15.277: INFO: rc: 1
Aug 21 12:35:15.277: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:35:25.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:35:26.498: INFO: rc: 1
Aug 21 12:35:26.498: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:35:36.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:35:38.532: INFO: rc: 1
Aug 21 12:35:38.532: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:35:48.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:35:49.768: INFO: rc: 1
Aug 21 12:35:49.769: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:35:59.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:36:01.009: INFO: rc: 1
Aug 21 12:36:01.009: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:36:11.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:36:12.228: INFO: rc: 1
Aug 21 12:36:12.228: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:36:22.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:36:23.492: INFO: rc: 1
Aug 21 12:36:23.492: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:36:33.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:36:34.999: INFO: rc: 1
Aug 21 12:36:34.999: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 21 12:36:45.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6694 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 12:36:46.254: INFO: rc: 1
Aug 21 12:36:46.254: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Aug 21 12:36:46.254: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 21 12:36:46.283: INFO: Deleting all statefulset in ns statefulset-6694
Aug 21 12:36:46.286: INFO: Scaling statefulset ss to 0
Aug 21 12:36:46.299: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 12:36:46.302: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:36:46.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6694" for this suite.

• [SLOW TEST:369.345 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":113,"skipped":1932,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
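The scaling test above toggles pod readiness by moving the web server's index.html out of (and back into) the document root with kubectl exec: with the file gone the readiness probe fails, the pod goes NotReady, and OrderedReady pod management refuses to create or delete further ordinals. A hedged sketch of the same sequence, with an illustrative namespace and the label selector used above:

# Make ss-0 NotReady by hiding the file its readiness probe serves.
kubectl -n statefulset-demo exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

# While ss-0 is NotReady, a scale-up must halt before creating ss-1 and ss-2.
kubectl -n statefulset-demo scale statefulset ss --replicas=3
kubectl -n statefulset-demo get pods -l baz=blah,foo=bar

# Restoring the file makes ss-0 Ready again, and the scale-up proceeds in
# ordinal order (ss-0, ss-1, ss-2); a later scale-down removes them in reverse.
kubectl -n statefulset-demo exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'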
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:36:46.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5878
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-5878
Aug 21 12:36:46.835: INFO: Found 0 stateful pods, waiting for 1
Aug 21 12:36:56.844: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 21 12:36:56.923: INFO: Deleting all statefulset in ns statefulset-5878
Aug 21 12:36:56.994: INFO: Scaling statefulset ss to 0
Aug 21 12:37:17.605: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 12:37:17.610: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:37:17.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5878" for this suite.

• [SLOW TEST:31.093 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":114,"skipped":1957,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
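The scale-subresource test above reads and updates the StatefulSet's /scale endpoint rather than patching the object directly. The same round trip can be sketched with kubectl; the namespace and object names here are illustrative:

# Read spec.replicas from the parent object, then through the /scale subresource.
kubectl -n statefulset-demo get statefulset ss -o jsonpath='{.spec.replicas}'
kubectl get --raw /apis/apps/v1/namespaces/statefulset-demo/statefulsets/ss/scale

# kubectl scale goes through the same subresource; afterwards spec.replicas on
# the StatefulSet reflects the new value.
kubectl -n statefulset-demo scale statefulset ss --replicas=2
kubectl -n statefulset-demo get statefulset ss -o jsonpath='{.spec.replicas}'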
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:37:17.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 21 12:37:17.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 21 12:38:16.850: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 12:38:27.039: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:39:36.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1253" for this suite.

• [SLOW TEST:138.918 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":115,"skipped":1963,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
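The OpenAPI-publishing test above creates custom resources in one API group served at two versions and checks that both schemas appear in the aggregated OpenAPI document. Once such a CRD is established, the published schemas are visible to ordinary clients; a sketch follows, with placeholder resource and group names rather than the generated e2e ones:

# Each served version's schema becomes available to kubectl explain.
kubectl explain e2e-test-crds --api-version=crd-publish-openapi-test-multi-ver.example.com/v1
kubectl explain e2e-test-crds --api-version=crd-publish-openapi-test-multi-ver.example.com/v2

# Definitions for both versions also appear in the aggregated OpenAPI v2 document.
kubectl get --raw /openapi/v2 | grep -c crd-publish-openapi-test-multi-ver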
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:39:36.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 21 12:39:46.794: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1393 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:39:46.795: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:39:46.868002      10 log.go:172] (0x40054e42c0) (0x4000859d60) Create stream
I0821 12:39:46.868257      10 log.go:172] (0x40054e42c0) (0x4000859d60) Stream added, broadcasting: 1
I0821 12:39:46.876329      10 log.go:172] (0x40054e42c0) Reply frame received for 1
I0821 12:39:46.876560      10 log.go:172] (0x40054e42c0) (0x4000127cc0) Create stream
I0821 12:39:46.876685      10 log.go:172] (0x40054e42c0) (0x4000127cc0) Stream added, broadcasting: 3
I0821 12:39:46.878192      10 log.go:172] (0x40054e42c0) Reply frame received for 3
I0821 12:39:46.878337      10 log.go:172] (0x40054e42c0) (0x4002f80000) Create stream
I0821 12:39:46.878404      10 log.go:172] (0x40054e42c0) (0x4002f80000) Stream added, broadcasting: 5
I0821 12:39:46.879523      10 log.go:172] (0x40054e42c0) Reply frame received for 5
I0821 12:39:46.941805      10 log.go:172] (0x40054e42c0) Data frame received for 5
I0821 12:39:46.942034      10 log.go:172] (0x4002f80000) (5) Data frame handling
I0821 12:39:46.942271      10 log.go:172] (0x40054e42c0) Data frame received for 3
I0821 12:39:46.942405      10 log.go:172] (0x4000127cc0) (3) Data frame handling
I0821 12:39:46.942533      10 log.go:172] (0x4000127cc0) (3) Data frame sent
I0821 12:39:46.942618      10 log.go:172] (0x40054e42c0) Data frame received for 3
I0821 12:39:46.942689      10 log.go:172] (0x4000127cc0) (3) Data frame handling
I0821 12:39:46.943279      10 log.go:172] (0x40054e42c0) Data frame received for 1
I0821 12:39:46.943391      10 log.go:172] (0x4000859d60) (1) Data frame handling
I0821 12:39:46.943524      10 log.go:172] (0x4000859d60) (1) Data frame sent
I0821 12:39:46.943678      10 log.go:172] (0x40054e42c0) (0x4000859d60) Stream removed, broadcasting: 1
I0821 12:39:46.943845      10 log.go:172] (0x40054e42c0) Go away received
I0821 12:39:46.944178      10 log.go:172] (0x40054e42c0) (0x4000859d60) Stream removed, broadcasting: 1
I0821 12:39:46.944304      10 log.go:172] (0x40054e42c0) (0x4000127cc0) Stream removed, broadcasting: 3
I0821 12:39:46.944419      10 log.go:172] (0x40054e42c0) (0x4002f80000) Stream removed, broadcasting: 5
Aug 21 12:39:46.944: INFO: Exec stderr: ""
Aug 21 12:39:46.945: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1393 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:39:46.945: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:39:47.012603      10 log.go:172] (0x4005558370) (0x400247e640) Create stream
I0821 12:39:47.012813      10 log.go:172] (0x4005558370) (0x400247e640) Stream added, broadcasting: 1
I0821 12:39:47.016474      10 log.go:172] (0x4005558370) Reply frame received for 1
I0821 12:39:47.016708      10 log.go:172] (0x4005558370) (0x400247e780) Create stream
I0821 12:39:47.016876      10 log.go:172] (0x4005558370) (0x400247e780) Stream added, broadcasting: 3
I0821 12:39:47.018781      10 log.go:172] (0x4005558370) Reply frame received for 3
I0821 12:39:47.018946      10 log.go:172] (0x4005558370) (0x40009215e0) Create stream
I0821 12:39:47.019035      10 log.go:172] (0x4005558370) (0x40009215e0) Stream added, broadcasting: 5
I0821 12:39:47.020866      10 log.go:172] (0x4005558370) Reply frame received for 5
I0821 12:39:47.084279      10 log.go:172] (0x4005558370) Data frame received for 3
I0821 12:39:47.084516      10 log.go:172] (0x400247e780) (3) Data frame handling
I0821 12:39:47.084858      10 log.go:172] (0x4005558370) Data frame received for 5
I0821 12:39:47.085195      10 log.go:172] (0x40009215e0) (5) Data frame handling
I0821 12:39:47.085394      10 log.go:172] (0x400247e780) (3) Data frame sent
I0821 12:39:47.085542      10 log.go:172] (0x4005558370) Data frame received for 3
I0821 12:39:47.085727      10 log.go:172] (0x400247e780) (3) Data frame handling
I0821 12:39:47.086052      10 log.go:172] (0x4005558370) Data frame received for 1
I0821 12:39:47.086143      10 log.go:172] (0x400247e640) (1) Data frame handling
I0821 12:39:47.086219      10 log.go:172] (0x400247e640) (1) Data frame sent
I0821 12:39:47.086313      10 log.go:172] (0x4005558370) (0x400247e640) Stream removed, broadcasting: 1
I0821 12:39:47.086417      10 log.go:172] (0x4005558370) Go away received
I0821 12:39:47.086911      10 log.go:172] (0x4005558370) (0x400247e640) Stream removed, broadcasting: 1
I0821 12:39:47.087059      10 log.go:172] (0x4005558370) (0x400247e780) Stream removed, broadcasting: 3
I0821 12:39:47.087194      10 log.go:172] (0x4005558370) (0x40009215e0) Stream removed, broadcasting: 5
Aug 21 12:39:47.087: INFO: Exec stderr: ""
Aug 21 12:39:47.087: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1393 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:39:47.087: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:39:47.147975      10 log.go:172] (0x4004a48fd0) (0x40015b92c0) Create stream
I0821 12:39:47.148159      10 log.go:172] (0x4004a48fd0) (0x40015b92c0) Stream added, broadcasting: 1
I0821 12:39:47.152579      10 log.go:172] (0x4004a48fd0) Reply frame received for 1
I0821 12:39:47.152987      10 log.go:172] (0x4004a48fd0) (0x400247e8c0) Create stream
I0821 12:39:47.153163      10 log.go:172] (0x4004a48fd0) (0x400247e8c0) Stream added, broadcasting: 3
I0821 12:39:47.154867      10 log.go:172] (0x4004a48fd0) Reply frame received for 3
I0821 12:39:47.155034      10 log.go:172] (0x4004a48fd0) (0x40015b9360) Create stream
I0821 12:39:47.155114      10 log.go:172] (0x4004a48fd0) (0x40015b9360) Stream added, broadcasting: 5
I0821 12:39:47.156595      10 log.go:172] (0x4004a48fd0) Reply frame received for 5
I0821 12:39:47.236372      10 log.go:172] (0x4004a48fd0) Data frame received for 5
I0821 12:39:47.236515      10 log.go:172] (0x40015b9360) (5) Data frame handling
I0821 12:39:47.236629      10 log.go:172] (0x4004a48fd0) Data frame received for 3
I0821 12:39:47.236810      10 log.go:172] (0x400247e8c0) (3) Data frame handling
I0821 12:39:47.236920      10 log.go:172] (0x400247e8c0) (3) Data frame sent
I0821 12:39:47.236998      10 log.go:172] (0x4004a48fd0) Data frame received for 3
I0821 12:39:47.237069      10 log.go:172] (0x400247e8c0) (3) Data frame handling
I0821 12:39:47.237644      10 log.go:172] (0x4004a48fd0) Data frame received for 1
I0821 12:39:47.237733      10 log.go:172] (0x40015b92c0) (1) Data frame handling
I0821 12:39:47.237801      10 log.go:172] (0x40015b92c0) (1) Data frame sent
I0821 12:39:47.237947      10 log.go:172] (0x4004a48fd0) (0x40015b92c0) Stream removed, broadcasting: 1
I0821 12:39:47.238038      10 log.go:172] (0x4004a48fd0) Go away received
I0821 12:39:47.238404      10 log.go:172] (0x4004a48fd0) (0x40015b92c0) Stream removed, broadcasting: 1
I0821 12:39:47.238551      10 log.go:172] (0x4004a48fd0) (0x400247e8c0) Stream removed, broadcasting: 3
I0821 12:39:47.238636      10 log.go:172] (0x4004a48fd0) (0x40015b9360) Stream removed, broadcasting: 5
Aug 21 12:39:47.238: INFO: Exec stderr: ""
Aug 21 12:39:47.238: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1393 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:39:47.239: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:39:47.312537      10 log.go:172] (0x400567c370) (0x4002f80640) Create stream
I0821 12:39:47.312839      10 log.go:172] (0x400567c370) (0x4002f80640) Stream added, broadcasting: 1
I0821 12:39:47.317394      10 log.go:172] (0x400567c370) Reply frame received for 1
I0821 12:39:47.317668      10 log.go:172] (0x400567c370) (0x4002f806e0) Create stream
I0821 12:39:47.317786      10 log.go:172] (0x400567c370) (0x4002f806e0) Stream added, broadcasting: 3
I0821 12:39:47.319454      10 log.go:172] (0x400567c370) Reply frame received for 3
I0821 12:39:47.319579      10 log.go:172] (0x400567c370) (0x4002f80780) Create stream
I0821 12:39:47.319657      10 log.go:172] (0x400567c370) (0x4002f80780) Stream added, broadcasting: 5
I0821 12:39:47.321256      10 log.go:172] (0x400567c370) Reply frame received for 5
I0821 12:39:47.392922      10 log.go:172] (0x400567c370) Data frame received for 5
I0821 12:39:47.393141      10 log.go:172] (0x4002f80780) (5) Data frame handling
I0821 12:39:47.393304      10 log.go:172] (0x400567c370) Data frame received for 3
I0821 12:39:47.393451      10 log.go:172] (0x4002f806e0) (3) Data frame handling
I0821 12:39:47.393578      10 log.go:172] (0x4002f806e0) (3) Data frame sent
I0821 12:39:47.393717      10 log.go:172] (0x400567c370) Data frame received for 3
I0821 12:39:47.393851      10 log.go:172] (0x4002f806e0) (3) Data frame handling
I0821 12:39:47.395476      10 log.go:172] (0x400567c370) Data frame received for 1
I0821 12:39:47.395576      10 log.go:172] (0x4002f80640) (1) Data frame handling
I0821 12:39:47.395670      10 log.go:172] (0x4002f80640) (1) Data frame sent
I0821 12:39:47.395759      10 log.go:172] (0x400567c370) (0x4002f80640) Stream removed, broadcasting: 1
I0821 12:39:47.395860      10 log.go:172] (0x400567c370) Go away received
I0821 12:39:47.396231      10 log.go:172] (0x400567c370) (0x4002f80640) Stream removed, broadcasting: 1
I0821 12:39:47.396332      10 log.go:172] (0x400567c370) (0x4002f806e0) Stream removed, broadcasting: 3
I0821 12:39:47.396468      10 log.go:172] (0x400567c370) (0x4002f80780) Stream removed, broadcasting: 5
Aug 21 12:39:47.396: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 21 12:39:47.396: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1393 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:39:47.396: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:39:47.469062      10 log.go:172] (0x40051e26e0) (0x400164a460) Create stream
I0821 12:39:47.469253      10 log.go:172] (0x40051e26e0) (0x400164a460) Stream added, broadcasting: 1
I0821 12:39:47.474247      10 log.go:172] (0x40051e26e0) Reply frame received for 1
I0821 12:39:47.474428      10 log.go:172] (0x40051e26e0) (0x400164a500) Create stream
I0821 12:39:47.474524      10 log.go:172] (0x40051e26e0) (0x400164a500) Stream added, broadcasting: 3
I0821 12:39:47.476705      10 log.go:172] (0x40051e26e0) Reply frame received for 3
I0821 12:39:47.476922      10 log.go:172] (0x40051e26e0) (0x4002f80820) Create stream
I0821 12:39:47.476999      10 log.go:172] (0x40051e26e0) (0x4002f80820) Stream added, broadcasting: 5
I0821 12:39:47.478884      10 log.go:172] (0x40051e26e0) Reply frame received for 5
I0821 12:39:47.541418      10 log.go:172] (0x40051e26e0) Data frame received for 5
I0821 12:39:47.541597      10 log.go:172] (0x4002f80820) (5) Data frame handling
I0821 12:39:47.541721      10 log.go:172] (0x40051e26e0) Data frame received for 3
I0821 12:39:47.541807      10 log.go:172] (0x400164a500) (3) Data frame handling
I0821 12:39:47.541899      10 log.go:172] (0x400164a500) (3) Data frame sent
I0821 12:39:47.542000      10 log.go:172] (0x40051e26e0) Data frame received for 3
I0821 12:39:47.542093      10 log.go:172] (0x400164a500) (3) Data frame handling
I0821 12:39:47.542946      10 log.go:172] (0x40051e26e0) Data frame received for 1
I0821 12:39:47.543039      10 log.go:172] (0x400164a460) (1) Data frame handling
I0821 12:39:47.543124      10 log.go:172] (0x400164a460) (1) Data frame sent
I0821 12:39:47.543209      10 log.go:172] (0x40051e26e0) (0x400164a460) Stream removed, broadcasting: 1
I0821 12:39:47.543314      10 log.go:172] (0x40051e26e0) Go away received
I0821 12:39:47.543758      10 log.go:172] (0x40051e26e0) (0x400164a460) Stream removed, broadcasting: 1
I0821 12:39:47.543967      10 log.go:172] (0x40051e26e0) (0x400164a500) Stream removed, broadcasting: 3
I0821 12:39:47.544110      10 log.go:172] (0x40051e26e0) (0x4002f80820) Stream removed, broadcasting: 5
Aug 21 12:39:47.544: INFO: Exec stderr: ""
Aug 21 12:39:47.544: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1393 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:39:47.544: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:39:47.601920      10 log.go:172] (0x4004a49600) (0x40015b99a0) Create stream
I0821 12:39:47.602077      10 log.go:172] (0x4004a49600) (0x40015b99a0) Stream added, broadcasting: 1
I0821 12:39:47.606364      10 log.go:172] (0x4004a49600) Reply frame received for 1
I0821 12:39:47.606660      10 log.go:172] (0x4004a49600) (0x400164a640) Create stream
I0821 12:39:47.606841      10 log.go:172] (0x4004a49600) (0x400164a640) Stream added, broadcasting: 3
I0821 12:39:47.609207      10 log.go:172] (0x4004a49600) Reply frame received for 3
I0821 12:39:47.609408      10 log.go:172] (0x4004a49600) (0x400247ea00) Create stream
I0821 12:39:47.609518      10 log.go:172] (0x4004a49600) (0x400247ea00) Stream added, broadcasting: 5
I0821 12:39:47.611438      10 log.go:172] (0x4004a49600) Reply frame received for 5
I0821 12:39:47.679199      10 log.go:172] (0x4004a49600) Data frame received for 3
I0821 12:39:47.679427      10 log.go:172] (0x400164a640) (3) Data frame handling
I0821 12:39:47.679540      10 log.go:172] (0x400164a640) (3) Data frame sent
I0821 12:39:47.679670      10 log.go:172] (0x4004a49600) Data frame received for 3
I0821 12:39:47.679842      10 log.go:172] (0x400164a640) (3) Data frame handling
I0821 12:39:47.680070      10 log.go:172] (0x4004a49600) Data frame received for 5
I0821 12:39:47.680186      10 log.go:172] (0x400247ea00) (5) Data frame handling
I0821 12:39:47.680366      10 log.go:172] (0x4004a49600) Data frame received for 1
I0821 12:39:47.680454      10 log.go:172] (0x40015b99a0) (1) Data frame handling
I0821 12:39:47.680555      10 log.go:172] (0x40015b99a0) (1) Data frame sent
I0821 12:39:47.680661      10 log.go:172] (0x4004a49600) (0x40015b99a0) Stream removed, broadcasting: 1
I0821 12:39:47.680878      10 log.go:172] (0x4004a49600) Go away received
I0821 12:39:47.681311      10 log.go:172] (0x4004a49600) (0x40015b99a0) Stream removed, broadcasting: 1
I0821 12:39:47.681440      10 log.go:172] (0x4004a49600) (0x400164a640) Stream removed, broadcasting: 3
I0821 12:39:47.681548      10 log.go:172] (0x4004a49600) (0x400247ea00) Stream removed, broadcasting: 5
Aug 21 12:39:47.681: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 21 12:39:47.682: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1393 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:39:47.682: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:39:47.743783      10 log.go:172] (0x40051e2d10) (0x400164adc0) Create stream
I0821 12:39:47.744040      10 log.go:172] (0x40051e2d10) (0x400164adc0) Stream added, broadcasting: 1
I0821 12:39:47.748113      10 log.go:172] (0x40051e2d10) Reply frame received for 1
I0821 12:39:47.748389      10 log.go:172] (0x40051e2d10) (0x400247ebe0) Create stream
I0821 12:39:47.748535      10 log.go:172] (0x40051e2d10) (0x400247ebe0) Stream added, broadcasting: 3
I0821 12:39:47.750155      10 log.go:172] (0x40051e2d10) Reply frame received for 3
I0821 12:39:47.750291      10 log.go:172] (0x40051e2d10) (0x4000aac820) Create stream
I0821 12:39:47.750353      10 log.go:172] (0x40051e2d10) (0x4000aac820) Stream added, broadcasting: 5
I0821 12:39:47.751732      10 log.go:172] (0x40051e2d10) Reply frame received for 5
I0821 12:39:47.812992      10 log.go:172] (0x40051e2d10) Data frame received for 5
I0821 12:39:47.813232      10 log.go:172] (0x4000aac820) (5) Data frame handling
I0821 12:39:47.813431      10 log.go:172] (0x40051e2d10) Data frame received for 3
I0821 12:39:47.813534      10 log.go:172] (0x400247ebe0) (3) Data frame handling
I0821 12:39:47.813620      10 log.go:172] (0x400247ebe0) (3) Data frame sent
I0821 12:39:47.813679      10 log.go:172] (0x40051e2d10) Data frame received for 3
I0821 12:39:47.813733      10 log.go:172] (0x400247ebe0) (3) Data frame handling
I0821 12:39:47.814795      10 log.go:172] (0x40051e2d10) Data frame received for 1
I0821 12:39:47.814852      10 log.go:172] (0x400164adc0) (1) Data frame handling
I0821 12:39:47.814908      10 log.go:172] (0x400164adc0) (1) Data frame sent
I0821 12:39:47.814974      10 log.go:172] (0x40051e2d10) (0x400164adc0) Stream removed, broadcasting: 1
I0821 12:39:47.815197      10 log.go:172] (0x40051e2d10) Go away received
I0821 12:39:47.815429      10 log.go:172] (0x40051e2d10) (0x400164adc0) Stream removed, broadcasting: 1
I0821 12:39:47.815628      10 log.go:172] (0x40051e2d10) (0x400247ebe0) Stream removed, broadcasting: 3
I0821 12:39:47.815756      10 log.go:172] (0x40051e2d10) (0x4000aac820) Stream removed, broadcasting: 5
Aug 21 12:39:47.815: INFO: Exec stderr: ""
Aug 21 12:39:47.816: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1393 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:39:47.816: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:39:47.876498      10 log.go:172] (0x400567c9a0) (0x4002f80a00) Create stream
I0821 12:39:47.876676      10 log.go:172] (0x400567c9a0) (0x4002f80a00) Stream added, broadcasting: 1
I0821 12:39:47.894220      10 log.go:172] (0x400567c9a0) Reply frame received for 1
I0821 12:39:47.895532      10 log.go:172] (0x400567c9a0) (0x4002a56000) Create stream
I0821 12:39:47.896135      10 log.go:172] (0x400567c9a0) (0x4002a56000) Stream added, broadcasting: 3
I0821 12:39:47.900301      10 log.go:172] (0x400567c9a0) Reply frame received for 3
I0821 12:39:47.901019      10 log.go:172] (0x400567c9a0) (0x4001e140a0) Create stream
I0821 12:39:47.901256      10 log.go:172] (0x400567c9a0) (0x4001e140a0) Stream added, broadcasting: 5
I0821 12:39:47.906026      10 log.go:172] (0x400567c9a0) Reply frame received for 5
I0821 12:39:47.954357      10 log.go:172] (0x400567c9a0) Data frame received for 3
I0821 12:39:47.954538      10 log.go:172] (0x400567c9a0) Data frame received for 5
I0821 12:39:47.954685      10 log.go:172] (0x4001e140a0) (5) Data frame handling
I0821 12:39:47.954791      10 log.go:172] (0x4002a56000) (3) Data frame handling
I0821 12:39:47.954934      10 log.go:172] (0x4002a56000) (3) Data frame sent
I0821 12:39:47.955065      10 log.go:172] (0x400567c9a0) Data frame received for 3
I0821 12:39:47.955195      10 log.go:172] (0x4002a56000) (3) Data frame handling
I0821 12:39:47.955367      10 log.go:172] (0x400567c9a0) Data frame received for 1
I0821 12:39:47.955470      10 log.go:172] (0x4002f80a00) (1) Data frame handling
I0821 12:39:47.955561      10 log.go:172] (0x4002f80a00) (1) Data frame sent
I0821 12:39:47.955683      10 log.go:172] (0x400567c9a0) (0x4002f80a00) Stream removed, broadcasting: 1
I0821 12:39:47.955840      10 log.go:172] (0x400567c9a0) Go away received
I0821 12:39:47.956122      10 log.go:172] (0x400567c9a0) (0x4002f80a00) Stream removed, broadcasting: 1
I0821 12:39:47.956216      10 log.go:172] (0x400567c9a0) (0x4002a56000) Stream removed, broadcasting: 3
I0821 12:39:47.956293      10 log.go:172] (0x400567c9a0) (0x4001e140a0) Stream removed, broadcasting: 5
Aug 21 12:39:47.956: INFO: Exec stderr: ""
Aug 21 12:39:47.956: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1393 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:39:47.956: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:39:48.006276      10 log.go:172] (0x40021704d0) (0x4001bfa320) Create stream
I0821 12:39:48.006373      10 log.go:172] (0x40021704d0) (0x4001bfa320) Stream added, broadcasting: 1
I0821 12:39:48.008696      10 log.go:172] (0x40021704d0) Reply frame received for 1
I0821 12:39:48.008898      10 log.go:172] (0x40021704d0) (0x4000d6e280) Create stream
I0821 12:39:48.008971      10 log.go:172] (0x40021704d0) (0x4000d6e280) Stream added, broadcasting: 3
I0821 12:39:48.010011      10 log.go:172] (0x40021704d0) Reply frame received for 3
I0821 12:39:48.010153      10 log.go:172] (0x40021704d0) (0x4001bfa3c0) Create stream
I0821 12:39:48.010218      10 log.go:172] (0x40021704d0) (0x4001bfa3c0) Stream added, broadcasting: 5
I0821 12:39:48.011270      10 log.go:172] (0x40021704d0) Reply frame received for 5
I0821 12:39:48.485749      10 log.go:172] (0x40021704d0) Data frame received for 5
I0821 12:39:48.485868      10 log.go:172] (0x4001bfa3c0) (5) Data frame handling
I0821 12:39:48.486038      10 log.go:172] (0x40021704d0) Data frame received for 3
I0821 12:39:48.486158      10 log.go:172] (0x4000d6e280) (3) Data frame handling
I0821 12:39:48.486268      10 log.go:172] (0x4000d6e280) (3) Data frame sent
I0821 12:39:48.486355      10 log.go:172] (0x40021704d0) Data frame received for 3
I0821 12:39:48.486458      10 log.go:172] (0x4000d6e280) (3) Data frame handling
I0821 12:39:48.487078      10 log.go:172] (0x40021704d0) Data frame received for 1
I0821 12:39:48.487198      10 log.go:172] (0x4001bfa320) (1) Data frame handling
I0821 12:39:48.487339      10 log.go:172] (0x4001bfa320) (1) Data frame sent
I0821 12:39:48.487476      10 log.go:172] (0x40021704d0) (0x4001bfa320) Stream removed, broadcasting: 1
I0821 12:39:48.487583      10 log.go:172] (0x40021704d0) Go away received
I0821 12:39:48.487785      10 log.go:172] (0x40021704d0) (0x4001bfa320) Stream removed, broadcasting: 1
I0821 12:39:48.487903      10 log.go:172] (0x40021704d0) (0x4000d6e280) Stream removed, broadcasting: 3
I0821 12:39:48.487995      10 log.go:172] (0x40021704d0) (0x4001bfa3c0) Stream removed, broadcasting: 5
Aug 21 12:39:48.488: INFO: Exec stderr: ""
Aug 21 12:39:48.488: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1393 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:39:48.488: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:39:48.562772      10 log.go:172] (0x400305a000) (0x40019f4140) Create stream
I0821 12:39:48.562934      10 log.go:172] (0x400305a000) (0x40019f4140) Stream added, broadcasting: 1
I0821 12:39:48.566178      10 log.go:172] (0x400305a000) Reply frame received for 1
I0821 12:39:48.566335      10 log.go:172] (0x400305a000) (0x4000920640) Create stream
I0821 12:39:48.566416      10 log.go:172] (0x400305a000) (0x4000920640) Stream added, broadcasting: 3
I0821 12:39:48.567767      10 log.go:172] (0x400305a000) Reply frame received for 3
I0821 12:39:48.567942      10 log.go:172] (0x400305a000) (0x40019f4280) Create stream
I0821 12:39:48.568045      10 log.go:172] (0x400305a000) (0x40019f4280) Stream added, broadcasting: 5
I0821 12:39:48.569880      10 log.go:172] (0x400305a000) Reply frame received for 5
I0821 12:39:48.632997      10 log.go:172] (0x400305a000) Data frame received for 5
I0821 12:39:48.633149      10 log.go:172] (0x40019f4280) (5) Data frame handling
I0821 12:39:48.633312      10 log.go:172] (0x400305a000) Data frame received for 3
I0821 12:39:48.633450      10 log.go:172] (0x4000920640) (3) Data frame handling
I0821 12:39:48.633643      10 log.go:172] (0x4000920640) (3) Data frame sent
I0821 12:39:48.633820      10 log.go:172] (0x400305a000) Data frame received for 3
I0821 12:39:48.633991      10 log.go:172] (0x4000920640) (3) Data frame handling
I0821 12:39:48.634256      10 log.go:172] (0x400305a000) Data frame received for 1
I0821 12:39:48.634386      10 log.go:172] (0x40019f4140) (1) Data frame handling
I0821 12:39:48.634507      10 log.go:172] (0x40019f4140) (1) Data frame sent
I0821 12:39:48.634655      10 log.go:172] (0x400305a000) (0x40019f4140) Stream removed, broadcasting: 1
I0821 12:39:48.634817      10 log.go:172] (0x400305a000) Go away received
I0821 12:39:48.635090      10 log.go:172] (0x400305a000) (0x40019f4140) Stream removed, broadcasting: 1
I0821 12:39:48.635230      10 log.go:172] (0x400305a000) (0x4000920640) Stream removed, broadcasting: 3
I0821 12:39:48.635366      10 log.go:172] (0x400305a000) (0x40019f4280) Stream removed, broadcasting: 5
Aug 21 12:39:48.635: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:39:48.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1393" for this suite.

• [SLOW TEST:12.088 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1965,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:39:48.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 21 12:39:48.726: INFO: Waiting up to 5m0s for pod "downward-api-d5b82280-7ca4-4ea3-85d5-4be3ac93e357" in namespace "downward-api-198" to be "Succeeded or Failed"
Aug 21 12:39:48.731: INFO: Pod "downward-api-d5b82280-7ca4-4ea3-85d5-4be3ac93e357": Phase="Pending", Reason="", readiness=false. Elapsed: 5.014278ms
Aug 21 12:39:50.738: INFO: Pod "downward-api-d5b82280-7ca4-4ea3-85d5-4be3ac93e357": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011893106s
Aug 21 12:39:52.828: INFO: Pod "downward-api-d5b82280-7ca4-4ea3-85d5-4be3ac93e357": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101790571s
Aug 21 12:39:54.898: INFO: Pod "downward-api-d5b82280-7ca4-4ea3-85d5-4be3ac93e357": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.17180975s
STEP: Saw pod success
Aug 21 12:39:54.898: INFO: Pod "downward-api-d5b82280-7ca4-4ea3-85d5-4be3ac93e357" satisfied condition "Succeeded or Failed"
Aug 21 12:39:54.917: INFO: Trying to get logs from node kali-worker2 pod downward-api-d5b82280-7ca4-4ea3-85d5-4be3ac93e357 container dapi-container: 
STEP: delete the pod
Aug 21 12:39:55.234: INFO: Waiting for pod downward-api-d5b82280-7ca4-4ea3-85d5-4be3ac93e357 to disappear
Aug 21 12:39:55.238: INFO: Pod downward-api-d5b82280-7ca4-4ea3-85d5-4be3ac93e357 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:39:55.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-198" for this suite.

• [SLOW TEST:6.636 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1965,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:39:55.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:39:55.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f968e66a-e8fa-4b08-afc3-2b8522c89950" in namespace "projected-9280" to be "Succeeded or Failed"
Aug 21 12:39:55.750: INFO: Pod "downwardapi-volume-f968e66a-e8fa-4b08-afc3-2b8522c89950": Phase="Pending", Reason="", readiness=false. Elapsed: 157.707813ms
Aug 21 12:39:57.757: INFO: Pod "downwardapi-volume-f968e66a-e8fa-4b08-afc3-2b8522c89950": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165538696s
Aug 21 12:39:59.767: INFO: Pod "downwardapi-volume-f968e66a-e8fa-4b08-afc3-2b8522c89950": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174683703s
STEP: Saw pod success
Aug 21 12:39:59.767: INFO: Pod "downwardapi-volume-f968e66a-e8fa-4b08-afc3-2b8522c89950" satisfied condition "Succeeded or Failed"
Aug 21 12:39:59.773: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f968e66a-e8fa-4b08-afc3-2b8522c89950 container client-container: 
STEP: delete the pod
Aug 21 12:39:59.985: INFO: Waiting for pod downwardapi-volume-f968e66a-e8fa-4b08-afc3-2b8522c89950 to disappear
Aug 21 12:40:00.036: INFO: Pod downwardapi-volume-f968e66a-e8fa-4b08-afc3-2b8522c89950 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:40:00.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9280" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":1974,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:40:00.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 21 12:40:00.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7001'
Aug 21 12:40:06.737: INFO: stderr: ""
Aug 21 12:40:06.737: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Aug 21 12:40:06.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7001'
Aug 21 12:40:19.305: INFO: stderr: ""
Aug 21 12:40:19.305: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:40:19.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7001" for this suite.

• [SLOW TEST:19.389 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":119,"skipped":1992,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:40:19.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:40:19.650: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28a00650-966b-41fa-b810-2bfa2a612f7b" in namespace "projected-5827" to be "Succeeded or Failed"
Aug 21 12:40:19.703: INFO: Pod "downwardapi-volume-28a00650-966b-41fa-b810-2bfa2a612f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 53.519025ms
Aug 21 12:40:21.828: INFO: Pod "downwardapi-volume-28a00650-966b-41fa-b810-2bfa2a612f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178647119s
Aug 21 12:40:23.835: INFO: Pod "downwardapi-volume-28a00650-966b-41fa-b810-2bfa2a612f7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185538354s
STEP: Saw pod success
Aug 21 12:40:23.836: INFO: Pod "downwardapi-volume-28a00650-966b-41fa-b810-2bfa2a612f7b" satisfied condition "Succeeded or Failed"
Aug 21 12:40:23.839: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-28a00650-966b-41fa-b810-2bfa2a612f7b container client-container: 
STEP: delete the pod
Aug 21 12:40:23.862: INFO: Waiting for pod downwardapi-volume-28a00650-966b-41fa-b810-2bfa2a612f7b to disappear
Aug 21 12:40:23.875: INFO: Pod downwardapi-volume-28a00650-966b-41fa-b810-2bfa2a612f7b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:40:23.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5827" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":1993,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:40:23.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Aug 21 12:40:24.404: INFO: Waiting up to 5m0s for pod "var-expansion-67167bf0-53ac-4846-b036-8e3a11a22e26" in namespace "var-expansion-7586" to be "Succeeded or Failed"
Aug 21 12:40:24.522: INFO: Pod "var-expansion-67167bf0-53ac-4846-b036-8e3a11a22e26": Phase="Pending", Reason="", readiness=false. Elapsed: 118.085947ms
Aug 21 12:40:26.528: INFO: Pod "var-expansion-67167bf0-53ac-4846-b036-8e3a11a22e26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123449165s
Aug 21 12:40:28.533: INFO: Pod "var-expansion-67167bf0-53ac-4846-b036-8e3a11a22e26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128880471s
Aug 21 12:40:30.538: INFO: Pod "var-expansion-67167bf0-53ac-4846-b036-8e3a11a22e26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13396765s
STEP: Saw pod success
Aug 21 12:40:30.538: INFO: Pod "var-expansion-67167bf0-53ac-4846-b036-8e3a11a22e26" satisfied condition "Succeeded or Failed"
Aug 21 12:40:30.542: INFO: Trying to get logs from node kali-worker pod var-expansion-67167bf0-53ac-4846-b036-8e3a11a22e26 container dapi-container: 
STEP: delete the pod
Aug 21 12:40:30.586: INFO: Waiting for pod var-expansion-67167bf0-53ac-4846-b036-8e3a11a22e26 to disappear
Aug 21 12:40:30.616: INFO: Pod var-expansion-67167bf0-53ac-4846-b036-8e3a11a22e26 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:40:30.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7586" for this suite.

• [SLOW TEST:6.689 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2011,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:40:30.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-lmzw
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 12:40:30.747: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lmzw" in namespace "subpath-1711" to be "Succeeded or Failed"
Aug 21 12:40:30.783: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Pending", Reason="", readiness=false. Elapsed: 35.040516ms
Aug 21 12:40:32.788: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0406021s
Aug 21 12:40:34.809: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Running", Reason="", readiness=true. Elapsed: 4.061920885s
Aug 21 12:40:36.821: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Running", Reason="", readiness=true. Elapsed: 6.072975665s
Aug 21 12:40:38.828: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Running", Reason="", readiness=true. Elapsed: 8.080379895s
Aug 21 12:40:40.835: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Running", Reason="", readiness=true. Elapsed: 10.08737572s
Aug 21 12:40:42.843: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Running", Reason="", readiness=true. Elapsed: 12.095310451s
Aug 21 12:40:44.850: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Running", Reason="", readiness=true. Elapsed: 14.102392565s
Aug 21 12:40:46.858: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Running", Reason="", readiness=true. Elapsed: 16.11045873s
Aug 21 12:40:48.865: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Running", Reason="", readiness=true. Elapsed: 18.11698147s
Aug 21 12:40:50.873: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Running", Reason="", readiness=true. Elapsed: 20.12497379s
Aug 21 12:40:52.880: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Running", Reason="", readiness=true. Elapsed: 22.132653411s
Aug 21 12:40:54.912: INFO: Pod "pod-subpath-test-configmap-lmzw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.164074579s
STEP: Saw pod success
Aug 21 12:40:54.912: INFO: Pod "pod-subpath-test-configmap-lmzw" satisfied condition "Succeeded or Failed"
Aug 21 12:40:54.917: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-lmzw container test-container-subpath-configmap-lmzw: 
STEP: delete the pod
Aug 21 12:40:54.970: INFO: Waiting for pod pod-subpath-test-configmap-lmzw to disappear
Aug 21 12:40:54.983: INFO: Pod pod-subpath-test-configmap-lmzw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lmzw
Aug 21 12:40:54.983: INFO: Deleting pod "pod-subpath-test-configmap-lmzw" in namespace "subpath-1711"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:40:54.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1711" for this suite.

• [SLOW TEST:24.378 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":122,"skipped":2037,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:40:55.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-62d3230b-2ba3-4bf9-a721-07cc462f5a1f
STEP: Creating secret with name secret-projected-all-test-volume-ab1adbc7-e3eb-4a20-a442-ca7dfd0282b6
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 21 12:40:55.553: INFO: Waiting up to 5m0s for pod "projected-volume-43ed8fa2-bf53-43be-b8b2-b15676b42a7b" in namespace "projected-290" to be "Succeeded or Failed"
Aug 21 12:40:55.559: INFO: Pod "projected-volume-43ed8fa2-bf53-43be-b8b2-b15676b42a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.679226ms
Aug 21 12:40:57.643: INFO: Pod "projected-volume-43ed8fa2-bf53-43be-b8b2-b15676b42a7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090078301s
Aug 21 12:40:59.650: INFO: Pod "projected-volume-43ed8fa2-bf53-43be-b8b2-b15676b42a7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096805664s
STEP: Saw pod success
Aug 21 12:40:59.650: INFO: Pod "projected-volume-43ed8fa2-bf53-43be-b8b2-b15676b42a7b" satisfied condition "Succeeded or Failed"
Aug 21 12:40:59.661: INFO: Trying to get logs from node kali-worker2 pod projected-volume-43ed8fa2-bf53-43be-b8b2-b15676b42a7b container projected-all-volume-test: 
STEP: delete the pod
Aug 21 12:40:59.695: INFO: Waiting for pod projected-volume-43ed8fa2-bf53-43be-b8b2-b15676b42a7b to disappear
Aug 21 12:40:59.709: INFO: Pod projected-volume-43ed8fa2-bf53-43be-b8b2-b15676b42a7b no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:40:59.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-290" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2053,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:40:59.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 21 12:41:04.386: INFO: Successfully updated pod "annotationupdate2b121f3f-7f77-4a2b-8a5f-cbdd20f83e3e"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:41:08.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1679" for this suite.

• [SLOW TEST:8.705 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2101,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:41:08.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:41:10.707: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:41:12.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610470, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610470, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610470, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610470, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:41:15.760: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:41:15.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2513" for this suite.
STEP: Destroying namespace "webhook-2513-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.455 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":125,"skipped":2108,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:41:15.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:41:16.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5621" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":126,"skipped":2112,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:41:16.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 21 12:41:26.637: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 21 12:41:26.671: INFO: Pod pod-with-poststart-http-hook still exists
Aug 21 12:41:28.672: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 21 12:41:28.678: INFO: Pod pod-with-poststart-http-hook still exists
Aug 21 12:41:30.672: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 21 12:41:30.680: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:41:30.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1489" for this suite.

• [SLOW TEST:14.551 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2135,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:41:30.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:41:32.424: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:41:34.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610492, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610492, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610492, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610492, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:41:36.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610492, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610492, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610492, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610492, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:41:39.506: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:41:40.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9541" for this suite.
STEP: Destroying namespace "webhook-9541-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.712 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":128,"skipped":2149,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:41:40.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 12:41:40.601: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:41:40.656: INFO: Number of nodes with available pods: 0
Aug 21 12:41:40.656: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:41:41.719: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:41:41.725: INFO: Number of nodes with available pods: 0
Aug 21 12:41:41.725: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:41:42.886: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:41:42.910: INFO: Number of nodes with available pods: 0
Aug 21 12:41:42.910: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:41:43.664: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:41:43.670: INFO: Number of nodes with available pods: 0
Aug 21 12:41:43.670: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:41:44.692: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:41:44.698: INFO: Number of nodes with available pods: 2
Aug 21 12:41:44.698: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 21 12:41:44.746: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:41:44.836: INFO: Number of nodes with available pods: 1
Aug 21 12:41:44.836: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:41:45.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:41:45.929: INFO: Number of nodes with available pods: 1
Aug 21 12:41:45.930: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:41:46.849: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:41:46.856: INFO: Number of nodes with available pods: 1
Aug 21 12:41:46.856: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:41:47.848: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:41:47.856: INFO: Number of nodes with available pods: 2
Aug 21 12:41:47.856: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2300, will wait for the garbage collector to delete the pods
Aug 21 12:41:47.944: INFO: Deleting DaemonSet.extensions daemon-set took: 21.945047ms
Aug 21 12:41:48.045: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.689163ms
Aug 21 12:41:59.151: INFO: Number of nodes with available pods: 0
Aug 21 12:41:59.151: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 12:41:59.155: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2300/daemonsets","resourceVersion":"2119749"},"items":null}

Aug 21 12:41:59.160: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2300/pods","resourceVersion":"2119749"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:41:59.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2300" for this suite.

• [SLOW TEST:18.786 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":129,"skipped":2164,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:41:59.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-9386fe2f-6e7b-4d33-addf-c5f0aef6ba9b
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:41:59.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9475" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":130,"skipped":2175,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:41:59.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5876.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5876.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 12:42:05.534: INFO: DNS probes using dns-test-34c1a3a5-31d2-4760-b9c5-6180243bab7f succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5876.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5876.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 12:42:13.674: INFO: File wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local from pod  dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 12:42:13.678: INFO: File jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local from pod  dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 12:42:13.678: INFO: Lookups using dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 failed for: [wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local]

Aug 21 12:42:18.685: INFO: File wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local from pod  dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 12:42:18.715: INFO: File jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local from pod  dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 12:42:18.716: INFO: Lookups using dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 failed for: [wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local]

Aug 21 12:42:23.685: INFO: File wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local from pod  dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 12:42:23.691: INFO: File jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local from pod  dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 12:42:23.691: INFO: Lookups using dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 failed for: [wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local]

Aug 21 12:42:28.684: INFO: File wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local from pod  dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 12:42:28.690: INFO: File jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local from pod  dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 12:42:28.690: INFO: Lookups using dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 failed for: [wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local]

Aug 21 12:42:33.685: INFO: File wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local from pod  dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 12:42:33.691: INFO: File jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local from pod  dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 21 12:42:33.691: INFO: Lookups using dns-5876/dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 failed for: [wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local]

Aug 21 12:42:38.708: INFO: DNS probes using dns-test-8c64d563-71ef-4137-a9f0-844ac1953ba1 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5876.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5876.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5876.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5876.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 12:42:47.744: INFO: DNS probes using dns-test-03e1136f-43d0-430e-9831-26ed69663c6b succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:42:47.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5876" for this suite.

• [SLOW TEST:48.549 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":131,"skipped":2181,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:42:47.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:42:48.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-985'
Aug 21 12:42:49.961: INFO: stderr: ""
Aug 21 12:42:49.961: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 21 12:42:49.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-985'
Aug 21 12:42:51.576: INFO: stderr: ""
Aug 21 12:42:51.576: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 21 12:42:52.583: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 12:42:52.583: INFO: Found 0 / 1
Aug 21 12:42:53.584: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 12:42:53.584: INFO: Found 1 / 1
Aug 21 12:42:53.584: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 21 12:42:53.589: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 12:42:53.589: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 21 12:42:53.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config describe pod agnhost-master-gwd4l --namespace=kubectl-985'
Aug 21 12:42:54.949: INFO: stderr: ""
Aug 21 12:42:54.949: INFO: stdout: "Name:         agnhost-master-gwd4l\nNamespace:    kubectl-985\nPriority:     0\nNode:         kali-worker2/172.18.0.13\nStart Time:   Fri, 21 Aug 2020 12:42:50 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.177\nIPs:\n  IP:           10.244.1.177\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://959da2b5c85d81b974ba5781808d92899ee59702e7dc16c2762d0be291854901\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 21 Aug 2020 12:42:52 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8zmn2 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-8zmn2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-8zmn2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                   Message\n  ----    ------     ----       ----                   -------\n  Normal  Scheduled    default-scheduler      Successfully assigned kubectl-985/agnhost-master-gwd4l to kali-worker2\n  Normal  Pulled     3s         kubelet, kali-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s         kubelet, kali-worker2  Created container agnhost-master\n  Normal  Started    2s         kubelet, kali-worker2  Started container agnhost-master\n"
Aug 21 12:42:54.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-985'
Aug 21 12:42:56.404: INFO: stderr: ""
Aug 21 12:42:56.404: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-985\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  6s    replication-controller  Created pod: agnhost-master-gwd4l\n"
Aug 21 12:42:56.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-985'
Aug 21 12:42:57.702: INFO: stderr: ""
Aug 21 12:42:57.702: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-985\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.109.9.157\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.177:6379\nSession Affinity:  None\nEvents:            \n"
Aug 21 12:42:57.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Aug 21 12:42:59.134: INFO: stderr: ""
Aug 21 12:42:59.134: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:39:46 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Fri, 21 Aug 2020 12:42:56 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 21 Aug 2020 12:40:25 +0000   Sat, 15 Aug 2020 09:39:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Fri, 21 Aug 2020 12:40:25 +0000   Sat, 15 Aug 2020 09:39:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 21 Aug 2020 12:40:25 +0000   Sat, 15 Aug 2020 09:39:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 21 Aug 2020 12:40:25 +0000   Sat, 15 Aug 2020 09:40:21 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.15\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 04bdd55b92ef4b87b98c1323984fd428\n  System UUID:                98a7b883-5496-49b8-a15e-cf216c9b1f46\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu Groovy Gorilla (development branch)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.0-rc.1-4-g43366250\n  Kubelet Version:            v1.18.8\n  Kube-Proxy Version:         v1.18.8\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-2567d                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     6d3h\n  kube-system                 coredns-66bff467f8-k8c2r                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     6d3h\n  kube-system                 
etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d3h\n  kube-system                 kindnet-gblkw                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      6d3h\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         6d3h\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         6d3h\n  kube-system                 kube-proxy-2d447                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d3h\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         6d3h\n  local-path-storage          local-path-provisioner-5b4b545c55-988r4       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d3h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
Aug 21 12:42:59.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config describe namespace kubectl-985'
Aug 21 12:43:00.429: INFO: stderr: ""
Aug 21 12:43:00.429: INFO: stdout: "Name:         kubectl-985\nLabels:       e2e-framework=kubectl\n              e2e-run=ef46a63a-f611-4a2c-8bf6-b2793b3b0eb3\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:43:00.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-985" for this suite.

• [SLOW TEST:12.534 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":132,"skipped":2190,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:43:00.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:43:00.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 21 12:43:01.192: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T12:43:01Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T12:43:01Z]] name:name1 resourceVersion:2120111 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:759a60a4-5132-495c-ad50-7a4078c54000] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 21 12:43:11.205: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T12:43:11Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T12:43:11Z]] name:name2 resourceVersion:2120158 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7615c283-6f7f-4c06-a400-b6330e06a8e0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 21 12:43:21.389: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T12:43:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T12:43:21Z]] name:name1 resourceVersion:2120190 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:759a60a4-5132-495c-ad50-7a4078c54000] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 21 12:43:31.397: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T12:43:11Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T12:43:31Z]] name:name2 resourceVersion:2120220 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7615c283-6f7f-4c06-a400-b6330e06a8e0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 21 12:43:41.448: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T12:43:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T12:43:21Z]] name:name1 resourceVersion:2120249 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:759a60a4-5132-495c-ad50-7a4078c54000] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 21 12:43:51.462: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-21T12:43:11Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-21T12:43:31Z]] name:name2 resourceVersion:2120278 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7615c283-6f7f-4c06-a400-b6330e06a8e0] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:44:01.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8722" for this suite.

• [SLOW TEST:61.938 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":133,"skipped":2234,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:44:02.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 12:44:08.614: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:44:08.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7522" for this suite.

• [SLOW TEST:6.277 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2304,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:44:08.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:44:15.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1892" for this suite.
STEP: Destroying namespace "nsdeletetest-9241" for this suite.
Aug 21 12:44:15.111: INFO: Namespace nsdeletetest-9241 was already deleted
STEP: Destroying namespace "nsdeletetest-6512" for this suite.

• [SLOW TEST:6.446 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":135,"skipped":2305,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:44:15.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 12:44:19.279: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:44:19.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9881" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2331,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:44:19.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-2581
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2581
STEP: Deleting pre-stop pod
Aug 21 12:44:34.700: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:44:34.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2581" for this suite.

• [SLOW TEST:15.559 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":137,"skipped":2331,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:44:34.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9829 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9829;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9829 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9829;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9829.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9829.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9829.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9829.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9829.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9829.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9829.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9829.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9829.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9829.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9829.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 201.52.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.52.201_udp@PTR;check="$$(dig +tcp +noall +answer +search 201.52.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.52.201_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9829 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9829;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9829 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9829;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9829.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9829.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9829.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9829.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9829.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9829.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9829.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9829.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9829.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9829.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9829.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9829.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 201.52.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.52.201_udp@PTR;check="$$(dig +tcp +noall +answer +search 201.52.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.52.201_tcp@PTR;sleep 1; done
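
Stripped of the per-record repetition, each probe iteration above follows one simple pattern; the doubled $$ in the logged commands is the pod-spec escape for a literal $, since $(...) inside container args is Kubernetes variable expansion. A simplified sketch of one iteration, using the record and result names from this run:

for i in `seq 1 600`; do
  # UDP lookup through the cluster search path; write a marker file only when dig returns a non-empty answer
  check="$(dig +notcp +noall +answer +search dns-test-service.dns-9829.svc A)" \
    && test -n "$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9829.svc
  # the same check over TCP
  check="$(dig +tcp +noall +answer +search dns-test-service.dns-9829.svc A)" \
    && test -n "$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9829.svc
  sleep 1
done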

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 12:44:48.981: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:48.985: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:48.989: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:48.993: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:48.997: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.002: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.006: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.011: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.039: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.043: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.047: INFO: Unable to read jessie_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.050: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.054: INFO: Unable to read jessie_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.058: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.061: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.065: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:49.088: INFO: Lookups using dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9829 wheezy_tcp@dns-test-service.dns-9829 wheezy_udp@dns-test-service.dns-9829.svc wheezy_tcp@dns-test-service.dns-9829.svc wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9829 jessie_tcp@dns-test-service.dns-9829 jessie_udp@dns-test-service.dns-9829.svc jessie_tcp@dns-test-service.dns-9829.svc jessie_udp@_http._tcp.dns-test-service.dns-9829.svc jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc]

Aug 21 12:44:54.095: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.100: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.104: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.109: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.114: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.118: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.123: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.128: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.160: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.170: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.174: INFO: Unable to read jessie_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.177: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.180: INFO: Unable to read jessie_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.183: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.187: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.191: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:54.231: INFO: Lookups using dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9829 wheezy_tcp@dns-test-service.dns-9829 wheezy_udp@dns-test-service.dns-9829.svc wheezy_tcp@dns-test-service.dns-9829.svc wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9829 jessie_tcp@dns-test-service.dns-9829 jessie_udp@dns-test-service.dns-9829.svc jessie_tcp@dns-test-service.dns-9829.svc jessie_udp@_http._tcp.dns-test-service.dns-9829.svc jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc]

Aug 21 12:44:59.095: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.100: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.105: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.110: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.115: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.119: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.123: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.126: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.154: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.159: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.164: INFO: Unable to read jessie_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.168: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.173: INFO: Unable to read jessie_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.178: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.182: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.186: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:44:59.213: INFO: Lookups using dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9829 wheezy_tcp@dns-test-service.dns-9829 wheezy_udp@dns-test-service.dns-9829.svc wheezy_tcp@dns-test-service.dns-9829.svc wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9829 jessie_tcp@dns-test-service.dns-9829 jessie_udp@dns-test-service.dns-9829.svc jessie_tcp@dns-test-service.dns-9829.svc jessie_udp@_http._tcp.dns-test-service.dns-9829.svc jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc]

Aug 21 12:45:04.096: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.101: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.106: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.111: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.115: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.119: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.123: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.137: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.355: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.359: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.363: INFO: Unable to read jessie_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.370: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.374: INFO: Unable to read jessie_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.377: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.380: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.384: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:04.403: INFO: Lookups using dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9829 wheezy_tcp@dns-test-service.dns-9829 wheezy_udp@dns-test-service.dns-9829.svc wheezy_tcp@dns-test-service.dns-9829.svc wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9829 jessie_tcp@dns-test-service.dns-9829 jessie_udp@dns-test-service.dns-9829.svc jessie_tcp@dns-test-service.dns-9829.svc jessie_udp@_http._tcp.dns-test-service.dns-9829.svc jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc]

Aug 21 12:45:09.126: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.137: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.142: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.147: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.151: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.155: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.158: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.162: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.187: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.191: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.195: INFO: Unable to read jessie_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.198: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.202: INFO: Unable to read jessie_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.215: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.219: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.269: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:09.301: INFO: Lookups using dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9829 wheezy_tcp@dns-test-service.dns-9829 wheezy_udp@dns-test-service.dns-9829.svc wheezy_tcp@dns-test-service.dns-9829.svc wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9829 jessie_tcp@dns-test-service.dns-9829 jessie_udp@dns-test-service.dns-9829.svc jessie_tcp@dns-test-service.dns-9829.svc jessie_udp@_http._tcp.dns-test-service.dns-9829.svc jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc]

Aug 21 12:45:14.126: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.130: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.135: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.140: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.145: INFO: Unable to read wheezy_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.149: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.153: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.157: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.190: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.195: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.200: INFO: Unable to read jessie_udp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.204: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829 from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.209: INFO: Unable to read jessie_udp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.214: INFO: Unable to read jessie_tcp@dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.219: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.223: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc from pod dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335: the server could not find the requested resource (get pods dns-test-16a71217-fb24-4410-a569-bed1be0ca335)
Aug 21 12:45:14.249: INFO: Lookups using dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9829 wheezy_tcp@dns-test-service.dns-9829 wheezy_udp@dns-test-service.dns-9829.svc wheezy_tcp@dns-test-service.dns-9829.svc wheezy_udp@_http._tcp.dns-test-service.dns-9829.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9829.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9829 jessie_tcp@dns-test-service.dns-9829 jessie_udp@dns-test-service.dns-9829.svc jessie_tcp@dns-test-service.dns-9829.svc jessie_udp@_http._tcp.dns-test-service.dns-9829.svc jessie_tcp@_http._tcp.dns-test-service.dns-9829.svc]

Aug 21 12:45:19.248: INFO: DNS probes using dns-9829/dns-test-16a71217-fb24-4410-a569-bed1be0ca335 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:45:20.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9829" for this suite.

• [SLOW TEST:45.347 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":138,"skipped":2336,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:45:20.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
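
The three container names encode the restart policy under test (assuming the usual rpa/rpof/rpn suffixes for restartPolicy Always, OnFailure and Never). The same expectation can be checked by hand with a one-off pod, e.g. for the Never case:

kubectl run terminate-cmd-rpn --image=busybox --restart=Never --command -- sh -c 'exit 1'
# once the container has exited, the phase should be Failed and the restart count 0
kubectl get pod terminate-cmd-rpn -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}{"\n"}'
kubectl delete pod terminate-cmd-rpn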
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:45:53.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2893" for this suite.

• [SLOW TEST:32.792 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2356,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:45:53.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:45:56.258: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:45:58.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610756, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610756, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610756, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610756, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:46:01.511: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
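
A registration with this behaviour has roughly the following shape (an illustrative sketch, not the object generated by the test): the webhook points at a backend the API server cannot reach, and failurePolicy: Fail turns every such failure into a rejection of the matching request.

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example                 # illustrative name
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail                       # reject requests when the webhook cannot be reached
  clientConfig:
    service:
      namespace: webhook-1854               # namespace from this run
      name: no-such-service                 # deliberately unreachable backend
      path: /
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF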
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:46:01.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1854" for this suite.
STEP: Destroying namespace "webhook-1854-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.730 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":140,"skipped":2372,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:46:01.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:46:01.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 21 12:46:21.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3209 create -f -'
Aug 21 12:46:25.831: INFO: stderr: ""
Aug 21 12:46:25.831: INFO: stdout: "e2e-test-crd-publish-openapi-8007-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 21 12:46:25.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3209 delete e2e-test-crd-publish-openapi-8007-crds test-cr'
Aug 21 12:46:27.067: INFO: stderr: ""
Aug 21 12:46:27.067: INFO: stdout: "e2e-test-crd-publish-openapi-8007-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 21 12:46:27.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3209 apply -f -'
Aug 21 12:46:28.557: INFO: stderr: ""
Aug 21 12:46:28.557: INFO: stdout: "e2e-test-crd-publish-openapi-8007-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 21 12:46:28.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3209 delete e2e-test-crd-publish-openapi-8007-crds test-cr'
Aug 21 12:46:29.789: INFO: stderr: ""
Aug 21 12:46:29.789: INFO: stdout: "e2e-test-crd-publish-openapi-8007-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 21 12:46:29.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8007-crds'
Aug 21 12:46:31.259: INFO: stderr: ""
Aug 21 12:46:31.259: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8007-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
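
The CRD exercised here carries no validation schema beyond preserving unknown fields, which is why kubectl explain prints only the kind and version with an empty description. In apiextensions.k8s.io/v1 the equivalent is roughly the following (illustrative names, not the generated e2e-test-crd-publish-openapi-8007 one):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com                 # illustrative name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept any unknown properties, as in the test
EOF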
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:46:51.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3209" for this suite.

• [SLOW TEST:49.615 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":141,"skipped":2397,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:46:51.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:46:53.714: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:46:55.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610813, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610813, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610813, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610813, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:46:58.772: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 21 12:46:58.839: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:46:58.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3605" for this suite.
STEP: Destroying namespace "webhook-3605-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.585 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":142,"skipped":2409,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:46:58.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 21 12:46:59.032: INFO: Waiting up to 5m0s for pod "pod-2c43fb8a-b0f8-434b-ba60-38c5f87e5384" in namespace "emptydir-3213" to be "Succeeded or Failed"
Aug 21 12:46:59.085: INFO: Pod "pod-2c43fb8a-b0f8-434b-ba60-38c5f87e5384": Phase="Pending", Reason="", readiness=false. Elapsed: 52.329868ms
Aug 21 12:47:01.092: INFO: Pod "pod-2c43fb8a-b0f8-434b-ba60-38c5f87e5384": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059330582s
Aug 21 12:47:03.096: INFO: Pod "pod-2c43fb8a-b0f8-434b-ba60-38c5f87e5384": Phase="Running", Reason="", readiness=true. Elapsed: 4.063763739s
Aug 21 12:47:05.103: INFO: Pod "pod-2c43fb8a-b0f8-434b-ba60-38c5f87e5384": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070176529s
STEP: Saw pod success
Aug 21 12:47:05.103: INFO: Pod "pod-2c43fb8a-b0f8-434b-ba60-38c5f87e5384" satisfied condition "Succeeded or Failed"
Aug 21 12:47:05.107: INFO: Trying to get logs from node kali-worker pod pod-2c43fb8a-b0f8-434b-ba60-38c5f87e5384 container test-container: 
STEP: delete the pod
Aug 21 12:47:05.181: INFO: Waiting for pod pod-2c43fb8a-b0f8-434b-ba60-38c5f87e5384 to disappear
Aug 21 12:47:05.188: INFO: Pod pod-2c43fb8a-b0f8-434b-ba60-38c5f87e5384 no longer exists
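
An approximate by-hand equivalent of this check runs an emptyDir pod as a non-root user, writes a file and reports its mode (a sketch; the conformance test uses its own mounttest image and arguments):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-check                 # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                         # non-root, matching the test title
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "umask 0022 && echo ok > /test-volume/file && stat -c '%a' /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                            # default medium, as in the test
EOF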
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:47:05.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3213" for this suite.

• [SLOW TEST:6.235 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2440,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:47:05.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5203, will wait for the garbage collector to delete the pods
Aug 21 12:47:11.395: INFO: Deleting Job.batch foo took: 7.903969ms
Aug 21 12:47:11.796: INFO: Terminating Job.batch foo pods took: 400.672424ms
STEP: Ensuring job was deleted
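
By hand, the same deletion and check look like this (namespace and Job name from this run; job-name is the label the Job controller puts on its pods):

kubectl delete job foo -n job-5203
# pods are removed asynchronously by the garbage collector; this should eventually return nothing
kubectl get pods -n job-5203 -l job-name=foo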
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:47:49.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5203" for this suite.

• [SLOW TEST:43.969 seconds]
[sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":144,"skipped":2462,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:47:49.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 21 12:47:57.351: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 12:47:57.374: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 12:47:59.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 12:47:59.380: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 12:48:01.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 12:48:01.380: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 12:48:03.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 12:48:03.379: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 12:48:05.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 12:48:05.394: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 12:48:07.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 12:48:07.399: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 21 12:48:09.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 21 12:48:09.379: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:48:09.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-345" for this suite.

• [SLOW TEST:20.219 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2471,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:48:09.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:48:11.390: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:48:13.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610891, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610891, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610891, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610891, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:48:15.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610891, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610891, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610891, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733610891, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:48:18.710: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:48:18.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-365-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:48:19.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1858" for this suite.
STEP: Destroying namespace "webhook-1858-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.640 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":146,"skipped":2486,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:48:20.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Aug 21 12:48:20.085: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 21 12:48:20.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7011'
Aug 21 12:48:21.757: INFO: stderr: ""
Aug 21 12:48:21.757: INFO: stdout: "service/agnhost-slave created\n"
Aug 21 12:48:21.758: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 21 12:48:21.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7011'
Aug 21 12:48:23.321: INFO: stderr: ""
Aug 21 12:48:23.321: INFO: stdout: "service/agnhost-master created\n"
Aug 21 12:48:23.323: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 21 12:48:23.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7011'
Aug 21 12:48:24.888: INFO: stderr: ""
Aug 21 12:48:24.888: INFO: stdout: "service/frontend created\n"
Aug 21 12:48:24.890: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 21 12:48:24.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7011'
Aug 21 12:48:26.479: INFO: stderr: ""
Aug 21 12:48:26.479: INFO: stdout: "deployment.apps/frontend created\n"
Aug 21 12:48:26.481: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 21 12:48:26.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7011'
Aug 21 12:48:28.048: INFO: stderr: ""
Aug 21 12:48:28.048: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 21 12:48:28.049: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 21 12:48:28.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7011'
Aug 21 12:48:30.387: INFO: stderr: ""
Aug 21 12:48:30.387: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 21 12:48:30.387: INFO: Waiting for all frontend pods to be Running.
Aug 21 12:48:35.438: INFO: Waiting for frontend to serve content.
Aug 21 12:48:36.464: INFO: Trying to add a new entry to the guestbook.
Aug 21 12:48:36.476: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 21 12:48:36.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7011'
Aug 21 12:48:37.678: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 12:48:37.678: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 12:48:37.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7011'
Aug 21 12:48:38.925: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 12:48:38.925: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 12:48:38.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7011'
Aug 21 12:48:40.172: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 12:48:40.172: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 12:48:40.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7011'
Aug 21 12:48:41.379: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 12:48:41.379: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 12:48:41.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7011'
Aug 21 12:48:42.775: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 12:48:42.775: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 12:48:42.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7011'
Aug 21 12:48:43.968: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 12:48:43.969: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:48:43.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7011" for this suite.

• [SLOW TEST:23.997 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":147,"skipped":2494,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:48:44.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 12:48:44.521: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:44.533: INFO: Number of nodes with available pods: 0
Aug 21 12:48:44.533: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:48:45.543: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:45.564: INFO: Number of nodes with available pods: 0
Aug 21 12:48:45.564: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:48:46.543: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:46.548: INFO: Number of nodes with available pods: 0
Aug 21 12:48:46.548: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:48:47.539: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:47.543: INFO: Number of nodes with available pods: 0
Aug 21 12:48:47.543: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:48:48.546: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:48.551: INFO: Number of nodes with available pods: 0
Aug 21 12:48:48.551: INFO: Node kali-worker is running more than one daemon pod
Aug 21 12:48:49.563: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:49.578: INFO: Number of nodes with available pods: 2
Aug 21 12:48:49.578: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 21 12:48:49.651: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:49.663: INFO: Number of nodes with available pods: 1
Aug 21 12:48:49.664: INFO: Node kali-worker2 is running more than one daemon pod
Aug 21 12:48:50.675: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:50.687: INFO: Number of nodes with available pods: 1
Aug 21 12:48:50.687: INFO: Node kali-worker2 is running more than one daemon pod
Aug 21 12:48:51.705: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:51.709: INFO: Number of nodes with available pods: 1
Aug 21 12:48:51.709: INFO: Node kali-worker2 is running more than one daemon pod
Aug 21 12:48:52.677: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:52.681: INFO: Number of nodes with available pods: 1
Aug 21 12:48:52.681: INFO: Node kali-worker2 is running more than one daemon pod
Aug 21 12:48:53.675: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:53.681: INFO: Number of nodes with available pods: 1
Aug 21 12:48:53.681: INFO: Node kali-worker2 is running more than one daemon pod
Aug 21 12:48:54.675: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:54.680: INFO: Number of nodes with available pods: 1
Aug 21 12:48:54.680: INFO: Node kali-worker2 is running more than one daemon pod
Aug 21 12:48:55.675: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:55.682: INFO: Number of nodes with available pods: 1
Aug 21 12:48:55.682: INFO: Node kali-worker2 is running more than one daemon pod
Aug 21 12:48:56.675: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:56.680: INFO: Number of nodes with available pods: 1
Aug 21 12:48:56.681: INFO: Node kali-worker2 is running more than one daemon pod
Aug 21 12:48:57.673: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 12:48:57.678: INFO: Number of nodes with available pods: 2
Aug 21 12:48:57.678: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7398, will wait for the garbage collector to delete the pods
Aug 21 12:48:57.743: INFO: Deleting DaemonSet.extensions daemon-set took: 6.716072ms
Aug 21 12:48:57.844: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.376447ms
Aug 21 12:49:09.350: INFO: Number of nodes with available pods: 0
Aug 21 12:49:09.350: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 12:49:09.354: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7398/daemonsets","resourceVersion":"2122093"},"items":null}

Aug 21 12:49:09.358: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7398/pods","resourceVersion":"2122093"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:49:09.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7398" for this suite.

• [SLOW TEST:25.361 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":148,"skipped":2494,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:49:09.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:49:09.597: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b3d40f5-d753-470f-b21b-e5aab5cb5551" in namespace "downward-api-4727" to be "Succeeded or Failed"
Aug 21 12:49:09.688: INFO: Pod "downwardapi-volume-3b3d40f5-d753-470f-b21b-e5aab5cb5551": Phase="Pending", Reason="", readiness=false. Elapsed: 90.731011ms
Aug 21 12:49:11.696: INFO: Pod "downwardapi-volume-3b3d40f5-d753-470f-b21b-e5aab5cb5551": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098472582s
Aug 21 12:49:13.704: INFO: Pod "downwardapi-volume-3b3d40f5-d753-470f-b21b-e5aab5cb5551": Phase="Running", Reason="", readiness=true. Elapsed: 4.106581847s
Aug 21 12:49:15.711: INFO: Pod "downwardapi-volume-3b3d40f5-d753-470f-b21b-e5aab5cb5551": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114251693s
STEP: Saw pod success
Aug 21 12:49:15.712: INFO: Pod "downwardapi-volume-3b3d40f5-d753-470f-b21b-e5aab5cb5551" satisfied condition "Succeeded or Failed"
Aug 21 12:49:15.718: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-3b3d40f5-d753-470f-b21b-e5aab5cb5551 container client-container: 
STEP: delete the pod
Aug 21 12:49:15.757: INFO: Waiting for pod downwardapi-volume-3b3d40f5-d753-470f-b21b-e5aab5cb5551 to disappear
Aug 21 12:49:15.761: INFO: Pod downwardapi-volume-3b3d40f5-d753-470f-b21b-e5aab5cb5551 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:49:15.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4727" for this suite.

• [SLOW TEST:6.377 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2514,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:49:15.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-2927
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 12:49:15.868: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 21 12:49:15.956: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 12:49:18.055: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 12:49:19.965: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:49:22.298: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:49:23.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:49:25.964: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:49:27.964: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 12:49:29.964: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 21 12:49:29.975: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 21 12:49:31.982: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 21 12:49:33.982: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 21 12:49:35.997: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 21 12:49:37.983: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 21 12:49:39.985: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 21 12:49:46.048: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.110:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2927 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:49:46.048: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:49:46.099006      10 log.go:172] (0x4002170370) (0x4000a20460) Create stream
I0821 12:49:46.099122      10 log.go:172] (0x4002170370) (0x4000a20460) Stream added, broadcasting: 1
I0821 12:49:46.102030      10 log.go:172] (0x4002170370) Reply frame received for 1
I0821 12:49:46.102147      10 log.go:172] (0x4002170370) (0x400164a8c0) Create stream
I0821 12:49:46.102212      10 log.go:172] (0x4002170370) (0x400164a8c0) Stream added, broadcasting: 3
I0821 12:49:46.103388      10 log.go:172] (0x4002170370) Reply frame received for 3
I0821 12:49:46.103525      10 log.go:172] (0x4002170370) (0x400164ac80) Create stream
I0821 12:49:46.103603      10 log.go:172] (0x4002170370) (0x400164ac80) Stream added, broadcasting: 5
I0821 12:49:46.104661      10 log.go:172] (0x4002170370) Reply frame received for 5
I0821 12:49:46.164162      10 log.go:172] (0x4002170370) Data frame received for 3
I0821 12:49:46.164320      10 log.go:172] (0x4002170370) Data frame received for 5
I0821 12:49:46.164452      10 log.go:172] (0x400164ac80) (5) Data frame handling
I0821 12:49:46.164559      10 log.go:172] (0x400164a8c0) (3) Data frame handling
I0821 12:49:46.164688      10 log.go:172] (0x400164a8c0) (3) Data frame sent
I0821 12:49:46.164880      10 log.go:172] (0x4002170370) Data frame received for 3
I0821 12:49:46.164968      10 log.go:172] (0x400164a8c0) (3) Data frame handling
I0821 12:49:46.165727      10 log.go:172] (0x4002170370) Data frame received for 1
I0821 12:49:46.165901      10 log.go:172] (0x4000a20460) (1) Data frame handling
I0821 12:49:46.166018      10 log.go:172] (0x4000a20460) (1) Data frame sent
I0821 12:49:46.166237      10 log.go:172] (0x4002170370) (0x4000a20460) Stream removed, broadcasting: 1
I0821 12:49:46.166415      10 log.go:172] (0x4002170370) Go away received
I0821 12:49:46.166803      10 log.go:172] (0x4002170370) (0x4000a20460) Stream removed, broadcasting: 1
I0821 12:49:46.166928      10 log.go:172] (0x4002170370) (0x400164a8c0) Stream removed, broadcasting: 3
I0821 12:49:46.167051      10 log.go:172] (0x4002170370) (0x400164ac80) Stream removed, broadcasting: 5
Aug 21 12:49:46.167: INFO: Found all expected endpoints: [netserver-0]
Aug 21 12:49:46.172: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.190:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2927 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 12:49:46.172: INFO: >>> kubeConfig: /root/.kube/config
I0821 12:49:46.234151      10 log.go:172] (0x4002caa790) (0x4001260dc0) Create stream
I0821 12:49:46.234257      10 log.go:172] (0x4002caa790) (0x4001260dc0) Stream added, broadcasting: 1
I0821 12:49:46.237819      10 log.go:172] (0x4002caa790) Reply frame received for 1
I0821 12:49:46.238058      10 log.go:172] (0x4002caa790) (0x40010f4280) Create stream
I0821 12:49:46.238149      10 log.go:172] (0x4002caa790) (0x40010f4280) Stream added, broadcasting: 3
I0821 12:49:46.239727      10 log.go:172] (0x4002caa790) Reply frame received for 3
I0821 12:49:46.239837      10 log.go:172] (0x4002caa790) (0x4001260e60) Create stream
I0821 12:49:46.239910      10 log.go:172] (0x4002caa790) (0x4001260e60) Stream added, broadcasting: 5
I0821 12:49:46.241176      10 log.go:172] (0x4002caa790) Reply frame received for 5
I0821 12:49:46.306755      10 log.go:172] (0x4002caa790) Data frame received for 3
I0821 12:49:46.306929      10 log.go:172] (0x40010f4280) (3) Data frame handling
I0821 12:49:46.307112      10 log.go:172] (0x40010f4280) (3) Data frame sent
I0821 12:49:46.307228      10 log.go:172] (0x4002caa790) Data frame received for 3
I0821 12:49:46.307336      10 log.go:172] (0x40010f4280) (3) Data frame handling
I0821 12:49:46.307578      10 log.go:172] (0x4002caa790) Data frame received for 5
I0821 12:49:46.307786      10 log.go:172] (0x4001260e60) (5) Data frame handling
I0821 12:49:46.308586      10 log.go:172] (0x4002caa790) Data frame received for 1
I0821 12:49:46.308844      10 log.go:172] (0x4001260dc0) (1) Data frame handling
I0821 12:49:46.309028      10 log.go:172] (0x4001260dc0) (1) Data frame sent
I0821 12:49:46.309157      10 log.go:172] (0x4002caa790) (0x4001260dc0) Stream removed, broadcasting: 1
I0821 12:49:46.309286      10 log.go:172] (0x4002caa790) Go away received
I0821 12:49:46.309664      10 log.go:172] (0x4002caa790) (0x4001260dc0) Stream removed, broadcasting: 1
I0821 12:49:46.309786      10 log.go:172] (0x4002caa790) (0x40010f4280) Stream removed, broadcasting: 3
I0821 12:49:46.309886      10 log.go:172] (0x4002caa790) (0x4001260e60) Stream removed, broadcasting: 5
Aug 21 12:49:46.309: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:49:46.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2927" for this suite.

• [SLOW TEST:30.546 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2520,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:49:46.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-7994ab57-a72e-4f6a-83a0-8bcadb5921d3
STEP: Creating a pod to test consume secrets
Aug 21 12:49:46.429: INFO: Waiting up to 5m0s for pod "pod-secrets-1140c324-9625-4a3e-837e-183a7a6a236e" in namespace "secrets-5043" to be "Succeeded or Failed"
Aug 21 12:49:46.464: INFO: Pod "pod-secrets-1140c324-9625-4a3e-837e-183a7a6a236e": Phase="Pending", Reason="", readiness=false. Elapsed: 34.075402ms
Aug 21 12:49:48.470: INFO: Pod "pod-secrets-1140c324-9625-4a3e-837e-183a7a6a236e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040413187s
Aug 21 12:49:50.475: INFO: Pod "pod-secrets-1140c324-9625-4a3e-837e-183a7a6a236e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045802229s
Aug 21 12:49:52.704: INFO: Pod "pod-secrets-1140c324-9625-4a3e-837e-183a7a6a236e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.274628464s
STEP: Saw pod success
Aug 21 12:49:52.704: INFO: Pod "pod-secrets-1140c324-9625-4a3e-837e-183a7a6a236e" satisfied condition "Succeeded or Failed"
Aug 21 12:49:52.709: INFO: Trying to get logs from node kali-worker pod pod-secrets-1140c324-9625-4a3e-837e-183a7a6a236e container secret-volume-test: 
STEP: delete the pod
Aug 21 12:49:52.790: INFO: Waiting for pod pod-secrets-1140c324-9625-4a3e-837e-183a7a6a236e to disappear
Aug 21 12:49:52.871: INFO: Pod pod-secrets-1140c324-9625-4a3e-837e-183a7a6a236e no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:49:52.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5043" for this suite.

• [SLOW TEST:6.559 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2542,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:49:52.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-81a8b732-27c8-423b-be18-800a1cb9cbc3 in namespace container-probe-1056
Aug 21 12:49:57.346: INFO: Started pod liveness-81a8b732-27c8-423b-be18-800a1cb9cbc3 in namespace container-probe-1056
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 12:49:57.359: INFO: Initial restart count of pod liveness-81a8b732-27c8-423b-be18-800a1cb9cbc3 is 0
Aug 21 12:50:23.543: INFO: Restart count of pod container-probe-1056/liveness-81a8b732-27c8-423b-be18-800a1cb9cbc3 is now 1 (26.183893327s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:50:23.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1056" for this suite.

• [SLOW TEST:30.715 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2542,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:50:23.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Aug 21 12:50:24.039: INFO: Waiting up to 5m0s for pod "var-expansion-d8d43062-4067-4062-8db5-906af6f22d72" in namespace "var-expansion-2351" to be "Succeeded or Failed"
Aug 21 12:50:24.073: INFO: Pod "var-expansion-d8d43062-4067-4062-8db5-906af6f22d72": Phase="Pending", Reason="", readiness=false. Elapsed: 33.258629ms
Aug 21 12:50:26.102: INFO: Pod "var-expansion-d8d43062-4067-4062-8db5-906af6f22d72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062858022s
Aug 21 12:50:28.111: INFO: Pod "var-expansion-d8d43062-4067-4062-8db5-906af6f22d72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07119991s
Aug 21 12:50:30.173: INFO: Pod "var-expansion-d8d43062-4067-4062-8db5-906af6f22d72": Phase="Running", Reason="", readiness=true. Elapsed: 6.133765019s
Aug 21 12:50:32.180: INFO: Pod "var-expansion-d8d43062-4067-4062-8db5-906af6f22d72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.14030521s
STEP: Saw pod success
Aug 21 12:50:32.180: INFO: Pod "var-expansion-d8d43062-4067-4062-8db5-906af6f22d72" satisfied condition "Succeeded or Failed"
Aug 21 12:50:32.185: INFO: Trying to get logs from node kali-worker pod var-expansion-d8d43062-4067-4062-8db5-906af6f22d72 container dapi-container: 
STEP: delete the pod
Aug 21 12:50:32.224: INFO: Waiting for pod var-expansion-d8d43062-4067-4062-8db5-906af6f22d72 to disappear
Aug 21 12:50:32.251: INFO: Pod var-expansion-d8d43062-4067-4062-8db5-906af6f22d72 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:50:32.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2351" for this suite.

• [SLOW TEST:8.668 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2552,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:50:32.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:50:32.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8870" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":154,"skipped":2552,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:50:32.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 21 12:50:37.115: INFO: Successfully updated pod "pod-update-85332ab5-a43d-4ced-811c-e73ef192cb3c"
STEP: verifying the updated pod is in kubernetes
Aug 21 12:50:37.141: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:50:37.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5660" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2559,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:50:37.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-93ada377-44a5-477a-94b0-0e816839a54a
STEP: Creating a pod to test consume configMaps
Aug 21 12:50:37.278: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5d7a1bd9-1097-4fa1-8e72-996b6c3441fe" in namespace "projected-7638" to be "Succeeded or Failed"
Aug 21 12:50:37.310: INFO: Pod "pod-projected-configmaps-5d7a1bd9-1097-4fa1-8e72-996b6c3441fe": Phase="Pending", Reason="", readiness=false. Elapsed: 31.657285ms
Aug 21 12:50:39.316: INFO: Pod "pod-projected-configmaps-5d7a1bd9-1097-4fa1-8e72-996b6c3441fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038170822s
Aug 21 12:50:41.324: INFO: Pod "pod-projected-configmaps-5d7a1bd9-1097-4fa1-8e72-996b6c3441fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046252029s
STEP: Saw pod success
Aug 21 12:50:41.325: INFO: Pod "pod-projected-configmaps-5d7a1bd9-1097-4fa1-8e72-996b6c3441fe" satisfied condition "Succeeded or Failed"
Aug 21 12:50:41.329: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-5d7a1bd9-1097-4fa1-8e72-996b6c3441fe container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 12:50:41.385: INFO: Waiting for pod pod-projected-configmaps-5d7a1bd9-1097-4fa1-8e72-996b6c3441fe to disappear
Aug 21 12:50:41.407: INFO: Pod pod-projected-configmaps-5d7a1bd9-1097-4fa1-8e72-996b6c3441fe no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:50:41.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7638" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2580,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:50:41.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:50:49.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7334" for this suite.

• [SLOW TEST:8.331 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2582,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
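
The Kubelet spec above schedules a command that always fails and then expects the container status to carry a terminated reason. A hedged sketch of the same check, with a hypothetical pod name and /bin/false standing in for the always-failing busybox command:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo                    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]               # always exits non-zero
EOF
# once the container has terminated, its reason is populated:
kubectl get pod bin-false-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# expected output: Error
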
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:50:49.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Aug 21 12:50:49.985: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Aug 21 12:50:50.133: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Aug 21 12:50:50.135: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Aug 21 12:50:50.180: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Aug 21 12:50:50.181: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Aug 21 12:50:50.311: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Aug 21 12:50:50.312: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Aug 21 12:50:58.094: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:50:58.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5228" for this suite.

• [SLOW TEST:8.417 seconds]
[sig-scheduling] LimitRange
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":158,"skipped":2600,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
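
The verified values above correspond to a LimitRange whose defaultRequest is cpu 100m / memory 200Mi / ephemeral-storage 200Gi and whose default limit is cpu 500m / memory 500Mi / ephemeral-storage 500Gi. A sketch of such an object follows; only the defaults are taken from the output above, while the name and the min/max bounds are illustrative assumptions (the log does not show them).

kubectl create -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-demo                   # hypothetical name
spec:
  limits:
  - type: Container
    defaultRequest:                       # injected into pods that omit requests
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                              # injected into pods that omit limits
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
    min:                                  # assumed bounds, for illustration only
      cpu: 50m
      memory: 50Mi
    max:
      cpu: "1"
      memory: 1Gi
EOF
# a pod created with no resources section then shows the defaults:
kubectl run lr-demo --image=busybox --restart=Never -- sleep 3600
kubectl get pod lr-demo -o jsonpath='{.spec.containers[0].resources}'
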
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:50:58.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:50:58.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5392a83-21f6-4eb9-9b1e-352346055344" in namespace "downward-api-8724" to be "Succeeded or Failed"
Aug 21 12:50:58.400: INFO: Pod "downwardapi-volume-f5392a83-21f6-4eb9-9b1e-352346055344": Phase="Pending", Reason="", readiness=false. Elapsed: 60.389133ms
Aug 21 12:51:00.541: INFO: Pod "downwardapi-volume-f5392a83-21f6-4eb9-9b1e-352346055344": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201359056s
Aug 21 12:51:02.670: INFO: Pod "downwardapi-volume-f5392a83-21f6-4eb9-9b1e-352346055344": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330447165s
Aug 21 12:51:04.899: INFO: Pod "downwardapi-volume-f5392a83-21f6-4eb9-9b1e-352346055344": Phase="Running", Reason="", readiness=true. Elapsed: 6.558893372s
Aug 21 12:51:06.945: INFO: Pod "downwardapi-volume-f5392a83-21f6-4eb9-9b1e-352346055344": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.605030243s
STEP: Saw pod success
Aug 21 12:51:06.945: INFO: Pod "downwardapi-volume-f5392a83-21f6-4eb9-9b1e-352346055344" satisfied condition "Succeeded or Failed"
Aug 21 12:51:06.952: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f5392a83-21f6-4eb9-9b1e-352346055344 container client-container: 
STEP: delete the pod
Aug 21 12:51:07.192: INFO: Waiting for pod downwardapi-volume-f5392a83-21f6-4eb9-9b1e-352346055344 to disappear
Aug 21 12:51:07.214: INFO: Pod downwardapi-volume-f5392a83-21f6-4eb9-9b1e-352346055344 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:51:07.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8724" for this suite.

• [SLOW TEST:9.152 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2610,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
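
The downward API file in the spec above exposes limits.memory for a container that sets no memory limit, so the kubelet writes the node's allocatable memory instead. A minimal sketch under that assumption; the pod name, mount path, and file name are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory here, so the value defaults to node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
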
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:51:07.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2783.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2783.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2783.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2783.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2783.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2783.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 12:51:16.393: INFO: DNS probes using dns-2783/dns-test-d9efb2fb-520d-44a9-be83-e1b3841bd9bf succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:51:16.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2783" for this suite.

• [SLOW TEST:9.177 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":160,"skipped":2638,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
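
The getent probes above resolve dns-querier-2.dns-test-service-2.dns-2783.svc.cluster.local, which only works when the pod sets hostname and subdomain matching a headless Service in the same namespace. A sketch of that pairing; the label, selector, and port are illustrative assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2                # must match the pod's spec.subdomain
spec:
  clusterIP: None                         # headless, so per-pod hostname records are published
  selector:
    dns-test: "true"
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    dns-test: "true"
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2
  containers:
  - name: querier
    image: busybox
    command: ["sleep", "3600"]
EOF
# inside the cluster this then resolves:
#   dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local
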
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:51:16.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 21 12:51:16.903: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:51:24.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3486" for this suite.

• [SLOW TEST:7.957 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":161,"skipped":2655,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
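
The init-container spec above relies on the rule that a pod with restartPolicy Never whose init container fails never starts its app containers and ends up Failed. A sketch with hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo                    # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]               # fails, so the app container below never runs
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'
# expected output once the init container has failed: Failed
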
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:51:24.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 21 12:51:25.192: INFO: Waiting up to 5m0s for pod "pod-11d82729-cb46-4398-94d2-7f039d18bd78" in namespace "emptydir-9841" to be "Succeeded or Failed"
Aug 21 12:51:25.239: INFO: Pod "pod-11d82729-cb46-4398-94d2-7f039d18bd78": Phase="Pending", Reason="", readiness=false. Elapsed: 46.458751ms
Aug 21 12:51:27.317: INFO: Pod "pod-11d82729-cb46-4398-94d2-7f039d18bd78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124662566s
Aug 21 12:51:29.325: INFO: Pod "pod-11d82729-cb46-4398-94d2-7f039d18bd78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132192382s
Aug 21 12:51:31.333: INFO: Pod "pod-11d82729-cb46-4398-94d2-7f039d18bd78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.140064917s
STEP: Saw pod success
Aug 21 12:51:31.333: INFO: Pod "pod-11d82729-cb46-4398-94d2-7f039d18bd78" satisfied condition "Succeeded or Failed"
Aug 21 12:51:31.338: INFO: Trying to get logs from node kali-worker pod pod-11d82729-cb46-4398-94d2-7f039d18bd78 container test-container: 
STEP: delete the pod
Aug 21 12:51:31.388: INFO: Waiting for pod pod-11d82729-cb46-4398-94d2-7f039d18bd78 to disappear
Aug 21 12:51:31.403: INFO: Pod pod-11d82729-cb46-4398-94d2-7f039d18bd78 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:51:31.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9841" for this suite.

• [SLOW TEST:6.993 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2657,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
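
The emptydir spec above ("non-root,0666,default") boils down to a pod running as a non-root UID that writes a 0666-mode file into an emptyDir backed by the node's default medium. The UID, file name, and command below are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                     # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                       # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo mount-tester > /test/file && chmod 0666 /test/file && ls -l /test/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test
  volumes:
  - name: test-volume
    emptyDir: {}                          # default medium (node disk, not tmpfs)
EOF
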
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:51:31.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:51:38.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7694" for this suite.

• [SLOW TEST:7.123 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":163,"skipped":2689,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
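
The ResourceQuota spec above only needs an object with some hard limits and the quota controller filling in its status. The quota name and limits below are placeholders:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota                        # hypothetical name
spec:
  hard:
    pods: "10"
    services: "5"
    configmaps: "10"
EOF
# the quota controller promptly populates .status.hard and .status.used:
kubectl get resourcequota test-quota -o jsonpath='{.status.used}'
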
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:51:38.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 21 12:51:46.370: INFO: 10 pods remaining
Aug 21 12:51:46.371: INFO: 8 pods has nil DeletionTimestamp
Aug 21 12:51:46.371: INFO: 
Aug 21 12:51:47.948: INFO: 0 pods remaining
Aug 21 12:51:47.948: INFO: 0 pods has nil DeletionTimestamp
Aug 21 12:51:47.949: INFO: 
STEP: Gathering metrics
W0821 12:51:50.298000      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 12:51:50.298: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:51:50.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1802" for this suite.

• [SLOW TEST:12.805 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":164,"skipped":2713,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
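
The deleteOptions this garbage-collector spec refers to is propagationPolicy: Foreground, which parks a foregroundDeletion finalizer on the replication controller so it stays visible until every owned pod is gone. One way to issue such a delete against the raw API is sketched below; the namespace and RC name are placeholders:

kubectl proxy --port=8001 &
curl -s -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/example-rc
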
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:51:51.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Aug 21 12:51:52.266: INFO: Waiting up to 5m0s for pod "var-expansion-906a2b5f-b9f1-4f6f-b02d-f1a0706325dd" in namespace "var-expansion-383" to be "Succeeded or Failed"
Aug 21 12:51:52.551: INFO: Pod "var-expansion-906a2b5f-b9f1-4f6f-b02d-f1a0706325dd": Phase="Pending", Reason="", readiness=false. Elapsed: 284.399969ms
Aug 21 12:51:54.559: INFO: Pod "var-expansion-906a2b5f-b9f1-4f6f-b02d-f1a0706325dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292224237s
Aug 21 12:51:56.598: INFO: Pod "var-expansion-906a2b5f-b9f1-4f6f-b02d-f1a0706325dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331849753s
STEP: Saw pod success
Aug 21 12:51:56.598: INFO: Pod "var-expansion-906a2b5f-b9f1-4f6f-b02d-f1a0706325dd" satisfied condition "Succeeded or Failed"
Aug 21 12:51:56.603: INFO: Trying to get logs from node kali-worker pod var-expansion-906a2b5f-b9f1-4f6f-b02d-f1a0706325dd container dapi-container: 
STEP: delete the pod
Aug 21 12:51:57.268: INFO: Waiting for pod var-expansion-906a2b5f-b9f1-4f6f-b02d-f1a0706325dd to disappear
Aug 21 12:51:57.278: INFO: Pod var-expansion-906a2b5f-b9f1-4f6f-b02d-f1a0706325dd no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:51:57.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-383" for this suite.

• [SLOW TEST:5.912 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2715,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
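
Variable expansion here means $(VAR) references in a container's command are substituted by the kubelet from the pod's own env before the container starts. A sketch with a hypothetical variable name and value:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo                # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo test-value is $(MY_VAR)"]   # $(MY_VAR) is expanded by Kubernetes, not the shell
    env:
    - name: MY_VAR
      value: "value-1"
EOF
kubectl logs var-expansion-demo
# expected output: test-value is value-1
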
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:51:57.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 21 12:51:57.568: INFO: Waiting up to 5m0s for pod "downward-api-cec14e22-6a6a-4d11-9107-e9a6c27315e6" in namespace "downward-api-1858" to be "Succeeded or Failed"
Aug 21 12:51:57.639: INFO: Pod "downward-api-cec14e22-6a6a-4d11-9107-e9a6c27315e6": Phase="Pending", Reason="", readiness=false. Elapsed: 71.202079ms
Aug 21 12:51:59.715: INFO: Pod "downward-api-cec14e22-6a6a-4d11-9107-e9a6c27315e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146472499s
Aug 21 12:52:01.815: INFO: Pod "downward-api-cec14e22-6a6a-4d11-9107-e9a6c27315e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24679229s
Aug 21 12:52:03.821: INFO: Pod "downward-api-cec14e22-6a6a-4d11-9107-e9a6c27315e6": Phase="Running", Reason="", readiness=true. Elapsed: 6.253171781s
Aug 21 12:52:05.830: INFO: Pod "downward-api-cec14e22-6a6a-4d11-9107-e9a6c27315e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.262000991s
STEP: Saw pod success
Aug 21 12:52:05.831: INFO: Pod "downward-api-cec14e22-6a6a-4d11-9107-e9a6c27315e6" satisfied condition "Succeeded or Failed"
Aug 21 12:52:05.838: INFO: Trying to get logs from node kali-worker pod downward-api-cec14e22-6a6a-4d11-9107-e9a6c27315e6 container dapi-container: 
STEP: delete the pod
Aug 21 12:52:05.883: INFO: Waiting for pod downward-api-cec14e22-6a6a-4d11-9107-e9a6c27315e6 to disappear
Aug 21 12:52:05.893: INFO: Pod downward-api-cec14e22-6a6a-4d11-9107-e9a6c27315e6 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:52:05.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1858" for this suite.

• [SLOW TEST:8.598 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2718,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
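
The downward API env-var spec above maps pod metadata onto environment variables through fieldRef. The variable names below are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'POD_NAME|POD_NAMESPACE|POD_IP'"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
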
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:52:05.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-293dee18-682d-4f15-b10b-8be63c5b8340
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-293dee18-682d-4f15-b10b-8be63c5b8340
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:52:12.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9682" for this suite.

• [SLOW TEST:6.207 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2720,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
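
The "updates should be reflected in volume" spec above hinges on the kubelet periodically re-syncing configMap volumes, so an edit to the ConfigMap eventually shows up in the mounted file (typically within a minute). Names and the patched value below are hypothetical:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-upd-demo                # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-watch-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-upd-demo
EOF
kubectl patch configmap configmap-upd-demo -p '{"data":{"data-1":"value-2"}}'
# after the kubelet's next sync, the pod's log switches from value-1 to value-2
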
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:52:12.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 12:52:12.211: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1851daee-9bf3-4eda-9211-89595f5c1dd7" in namespace "downward-api-3640" to be "Succeeded or Failed"
Aug 21 12:52:12.229: INFO: Pod "downwardapi-volume-1851daee-9bf3-4eda-9211-89595f5c1dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.976766ms
Aug 21 12:52:14.271: INFO: Pod "downwardapi-volume-1851daee-9bf3-4eda-9211-89595f5c1dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059436463s
Aug 21 12:52:16.280: INFO: Pod "downwardapi-volume-1851daee-9bf3-4eda-9211-89595f5c1dd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067948273s
STEP: Saw pod success
Aug 21 12:52:16.280: INFO: Pod "downwardapi-volume-1851daee-9bf3-4eda-9211-89595f5c1dd7" satisfied condition "Succeeded or Failed"
Aug 21 12:52:16.285: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1851daee-9bf3-4eda-9211-89595f5c1dd7 container client-container: 
STEP: delete the pod
Aug 21 12:52:16.317: INFO: Waiting for pod downwardapi-volume-1851daee-9bf3-4eda-9211-89595f5c1dd7 to disappear
Aug 21 12:52:16.346: INFO: Pod downwardapi-volume-1851daee-9bf3-4eda-9211-89595f5c1dd7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:52:16.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3640" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2743,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:52:16.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 12:52:19.541: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 12:52:21.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611139, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611139, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611139, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611139, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 12:52:23.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611139, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611139, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611139, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611139, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 12:52:26.804: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:52:27.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3373" for this suite.
STEP: Destroying namespace "webhook-3373-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.113 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":169,"skipped":2776,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:52:27.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:52:27.840: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:52:29.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9602" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":170,"skipped":2786,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
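
Custom resource defaulting in this spec comes from default: markers in a structural CRD schema; the API server applies them both to incoming requests and when objects are read back from storage. A minimal hypothetical CRD (group, names, and the defaulted field are all assumptions):

kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-defaults.example.com     # hypothetical group/plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-defaults
    singular: e2e-test-default
    kind: E2eTestDefault
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1                # applied on requests and when reading from storage
EOF
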
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:52:29.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9785.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9785.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 12:52:39.822: INFO: DNS probes using dns-9785/dns-test-554c4914-8039-43d4-90ce-ca124c91cfa7 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:52:39.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9785" for this suite.

• [SLOW TEST:10.554 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":171,"skipped":2840,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:52:40.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Aug 21 12:52:40.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config api-versions'
Aug 21 12:52:41.997: INFO: stderr: ""
Aug 21 12:52:41.998: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:52:41.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3429" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":172,"skipped":2900,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:52:42.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:52:42.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 21 12:53:02.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9450 create -f -'
Aug 21 12:53:06.395: INFO: stderr: ""
Aug 21 12:53:06.395: INFO: stdout: "e2e-test-crd-publish-openapi-3736-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 21 12:53:06.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9450 delete e2e-test-crd-publish-openapi-3736-crds test-cr'
Aug 21 12:53:07.651: INFO: stderr: ""
Aug 21 12:53:07.651: INFO: stdout: "e2e-test-crd-publish-openapi-3736-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 21 12:53:07.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9450 apply -f -'
Aug 21 12:53:09.267: INFO: stderr: ""
Aug 21 12:53:09.267: INFO: stdout: "e2e-test-crd-publish-openapi-3736-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 21 12:53:09.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9450 delete e2e-test-crd-publish-openapi-3736-crds test-cr'
Aug 21 12:53:10.534: INFO: stderr: ""
Aug 21 12:53:10.534: INFO: stdout: "e2e-test-crd-publish-openapi-3736-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 21 12:53:10.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3736-crds'
Aug 21 12:53:12.067: INFO: stderr: ""
Aug 21 12:53:12.067: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3736-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:53:31.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9450" for this suite.

• [SLOW TEST:49.680 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":173,"skipped":2915,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:53:31.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-434c1716-7dda-4b06-b8f2-6f86a36fa2a9
STEP: Creating a pod to test consume configMaps
Aug 21 12:53:31.879: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e65dd9e2-f3da-4d4f-9aa1-869eafcce32b" in namespace "projected-4335" to be "Succeeded or Failed"
Aug 21 12:53:31.902: INFO: Pod "pod-projected-configmaps-e65dd9e2-f3da-4d4f-9aa1-869eafcce32b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.863375ms
Aug 21 12:53:33.907: INFO: Pod "pod-projected-configmaps-e65dd9e2-f3da-4d4f-9aa1-869eafcce32b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028518204s
Aug 21 12:53:35.914: INFO: Pod "pod-projected-configmaps-e65dd9e2-f3da-4d4f-9aa1-869eafcce32b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034996271s
STEP: Saw pod success
Aug 21 12:53:35.914: INFO: Pod "pod-projected-configmaps-e65dd9e2-f3da-4d4f-9aa1-869eafcce32b" satisfied condition "Succeeded or Failed"
Aug 21 12:53:35.919: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-e65dd9e2-f3da-4d4f-9aa1-869eafcce32b container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 12:53:35.981: INFO: Waiting for pod pod-projected-configmaps-e65dd9e2-f3da-4d4f-9aa1-869eafcce32b to disappear
Aug 21 12:53:36.024: INFO: Pod pod-projected-configmaps-e65dd9e2-f3da-4d4f-9aa1-869eafcce32b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:53:36.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4335" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2950,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:53:36.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-fe45605b-0fca-42e1-9d5e-c71bf1029f9f
STEP: Creating a pod to test consume secrets
Aug 21 12:53:36.268: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee999b3f-414f-4968-a75a-1a44ec8e136c" in namespace "projected-4682" to be "Succeeded or Failed"
Aug 21 12:53:36.273: INFO: Pod "pod-projected-secrets-ee999b3f-414f-4968-a75a-1a44ec8e136c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.716905ms
Aug 21 12:53:38.278: INFO: Pod "pod-projected-secrets-ee999b3f-414f-4968-a75a-1a44ec8e136c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00994483s
Aug 21 12:53:40.285: INFO: Pod "pod-projected-secrets-ee999b3f-414f-4968-a75a-1a44ec8e136c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017172734s
STEP: Saw pod success
Aug 21 12:53:40.286: INFO: Pod "pod-projected-secrets-ee999b3f-414f-4968-a75a-1a44ec8e136c" satisfied condition "Succeeded or Failed"
Aug 21 12:53:40.291: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-ee999b3f-414f-4968-a75a-1a44ec8e136c container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 12:53:40.340: INFO: Waiting for pod pod-projected-secrets-ee999b3f-414f-4968-a75a-1a44ec8e136c to disappear
Aug 21 12:53:40.354: INFO: Pod pod-projected-secrets-ee999b3f-414f-4968-a75a-1a44ec8e136c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:53:40.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4682" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":2950,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:53:40.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug 21 12:53:40.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:55:30.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7498" for this suite.

• [SLOW TEST:109.760 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":176,"skipped":2964,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:55:30.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 21 12:55:30.216: INFO: Waiting up to 5m0s for pod "pod-f4b6836f-501e-4d9b-97a5-9f847e22e3e3" in namespace "emptydir-4571" to be "Succeeded or Failed"
Aug 21 12:55:30.221: INFO: Pod "pod-f4b6836f-501e-4d9b-97a5-9f847e22e3e3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.426332ms
Aug 21 12:55:32.227: INFO: Pod "pod-f4b6836f-501e-4d9b-97a5-9f847e22e3e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011456386s
Aug 21 12:55:34.517: INFO: Pod "pod-f4b6836f-501e-4d9b-97a5-9f847e22e3e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301644205s
Aug 21 12:55:36.524: INFO: Pod "pod-f4b6836f-501e-4d9b-97a5-9f847e22e3e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.308177387s
STEP: Saw pod success
Aug 21 12:55:36.524: INFO: Pod "pod-f4b6836f-501e-4d9b-97a5-9f847e22e3e3" satisfied condition "Succeeded or Failed"
Aug 21 12:55:36.529: INFO: Trying to get logs from node kali-worker2 pod pod-f4b6836f-501e-4d9b-97a5-9f847e22e3e3 container test-container: 
STEP: delete the pod
Aug 21 12:55:36.576: INFO: Waiting for pod pod-f4b6836f-501e-4d9b-97a5-9f847e22e3e3 to disappear
Aug 21 12:55:36.592: INFO: Pod pod-f4b6836f-501e-4d9b-97a5-9f847e22e3e3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:55:36.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4571" for this suite.

• [SLOW TEST:6.468 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":2983,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:55:36.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7570
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7570
STEP: Creating statefulset with conflicting port in namespace statefulset-7570
STEP: Waiting until pod test-pod starts running in namespace statefulset-7570
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-7570
Aug 21 12:55:40.914: INFO: Observed stateful pod in namespace: statefulset-7570, name: ss-0, uid: 3643d72f-244d-4be4-b844-cbcfff25493f, status phase: Pending. Waiting for statefulset controller to delete.
Aug 21 12:55:41.002: INFO: Observed stateful pod in namespace: statefulset-7570, name: ss-0, uid: 3643d72f-244d-4be4-b844-cbcfff25493f, status phase: Failed. Waiting for statefulset controller to delete.
Aug 21 12:55:41.025: INFO: Observed stateful pod in namespace: statefulset-7570, name: ss-0, uid: 3643d72f-244d-4be4-b844-cbcfff25493f, status phase: Failed. Waiting for statefulset controller to delete.
Aug 21 12:55:41.080: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7570
STEP: Removing pod with conflicting port in namespace statefulset-7570
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7570 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 21 12:55:45.186: INFO: Deleting all statefulset in ns statefulset-7570
Aug 21 12:55:45.190: INFO: Scaling statefulset ss to 0
Aug 21 12:56:05.214: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 12:56:05.219: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:56:05.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7570" for this suite.

• [SLOW TEST:28.652 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":178,"skipped":3093,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:56:05.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-rnmx
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 12:56:05.777: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rnmx" in namespace "subpath-5226" to be "Succeeded or Failed"
Aug 21 12:56:05.895: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Pending", Reason="", readiness=false. Elapsed: 118.662116ms
Aug 21 12:56:07.904: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126782464s
Aug 21 12:56:09.910: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Running", Reason="", readiness=true. Elapsed: 4.133019232s
Aug 21 12:56:11.917: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Running", Reason="", readiness=true. Elapsed: 6.140142236s
Aug 21 12:56:13.924: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Running", Reason="", readiness=true. Elapsed: 8.147545879s
Aug 21 12:56:15.931: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Running", Reason="", readiness=true. Elapsed: 10.154590595s
Aug 21 12:56:17.939: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Running", Reason="", readiness=true. Elapsed: 12.162414033s
Aug 21 12:56:19.946: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Running", Reason="", readiness=true. Elapsed: 14.169655709s
Aug 21 12:56:21.954: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Running", Reason="", readiness=true. Elapsed: 16.177305449s
Aug 21 12:56:23.961: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Running", Reason="", readiness=true. Elapsed: 18.184479724s
Aug 21 12:56:25.969: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Running", Reason="", readiness=true. Elapsed: 20.192532083s
Aug 21 12:56:27.977: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Running", Reason="", readiness=true. Elapsed: 22.200510953s
Aug 21 12:56:29.985: INFO: Pod "pod-subpath-test-secret-rnmx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.208010456s
STEP: Saw pod success
Aug 21 12:56:29.985: INFO: Pod "pod-subpath-test-secret-rnmx" satisfied condition "Succeeded or Failed"
Aug 21 12:56:30.244: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-rnmx container test-container-subpath-secret-rnmx: 
STEP: delete the pod
Aug 21 12:56:30.531: INFO: Waiting for pod pod-subpath-test-secret-rnmx to disappear
Aug 21 12:56:30.770: INFO: Pod pod-subpath-test-secret-rnmx no longer exists
STEP: Deleting pod pod-subpath-test-secret-rnmx
Aug 21 12:56:30.770: INFO: Deleting pod "pod-subpath-test-secret-rnmx" in namespace "subpath-5226"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:56:30.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5226" for this suite.

• [SLOW TEST:25.573 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":179,"skipped":3098,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:56:30.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 21 12:56:31.076: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:56:42.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1254" for this suite.

• [SLOW TEST:11.246 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":180,"skipped":3125,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:56:42.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:57:42.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7637" for this suite.

• [SLOW TEST:60.159 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3178,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:57:42.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Aug 21 12:57:51.148: INFO: Pod pod-hostip-3fb53f79-f793-47d0-bfe5-d5266c3d7762 has hostIP: 172.18.0.13
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:57:51.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-404" for this suite.

• [SLOW TEST:8.913 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3179,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:57:51.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-8858/configmap-test-18d526eb-7908-4a6f-a993-746ebd46c182
STEP: Creating a pod to test consume configMaps
Aug 21 12:57:51.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-79c6f854-9417-49db-a427-69d85f94742a" in namespace "configmap-8858" to be "Succeeded or Failed"
Aug 21 12:57:51.367: INFO: Pod "pod-configmaps-79c6f854-9417-49db-a427-69d85f94742a": Phase="Pending", Reason="", readiness=false. Elapsed: 47.30185ms
Aug 21 12:57:53.375: INFO: Pod "pod-configmaps-79c6f854-9417-49db-a427-69d85f94742a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055137968s
Aug 21 12:57:55.473: INFO: Pod "pod-configmaps-79c6f854-9417-49db-a427-69d85f94742a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153488115s
Aug 21 12:57:57.526: INFO: Pod "pod-configmaps-79c6f854-9417-49db-a427-69d85f94742a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206933047s
Aug 21 12:57:59.588: INFO: Pod "pod-configmaps-79c6f854-9417-49db-a427-69d85f94742a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.2681739s
STEP: Saw pod success
Aug 21 12:57:59.588: INFO: Pod "pod-configmaps-79c6f854-9417-49db-a427-69d85f94742a" satisfied condition "Succeeded or Failed"
Aug 21 12:57:59.657: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-79c6f854-9417-49db-a427-69d85f94742a container env-test: 
STEP: delete the pod
Aug 21 12:58:00.118: INFO: Waiting for pod pod-configmaps-79c6f854-9417-49db-a427-69d85f94742a to disappear
Aug 21 12:58:00.256: INFO: Pod pod-configmaps-79c6f854-9417-49db-a427-69d85f94742a no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:58:00.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8858" for this suite.

• [SLOW TEST:9.106 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3180,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:58:00.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-cc5504f5-e78a-421f-b338-ece4ba65de32
STEP: Creating a pod to test consume configMaps
Aug 21 12:58:00.724: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ab526ea-663c-49d8-9f4c-d382e25f1b3f" in namespace "projected-2927" to be "Succeeded or Failed"
Aug 21 12:58:01.026: INFO: Pod "pod-projected-configmaps-7ab526ea-663c-49d8-9f4c-d382e25f1b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 302.612335ms
Aug 21 12:58:03.034: INFO: Pod "pod-projected-configmaps-7ab526ea-663c-49d8-9f4c-d382e25f1b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310392932s
Aug 21 12:58:05.042: INFO: Pod "pod-projected-configmaps-7ab526ea-663c-49d8-9f4c-d382e25f1b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318218401s
Aug 21 12:58:07.050: INFO: Pod "pod-projected-configmaps-7ab526ea-663c-49d8-9f4c-d382e25f1b3f": Phase="Running", Reason="", readiness=true. Elapsed: 6.325956536s
Aug 21 12:58:09.073: INFO: Pod "pod-projected-configmaps-7ab526ea-663c-49d8-9f4c-d382e25f1b3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.348725485s
STEP: Saw pod success
Aug 21 12:58:09.073: INFO: Pod "pod-projected-configmaps-7ab526ea-663c-49d8-9f4c-d382e25f1b3f" satisfied condition "Succeeded or Failed"
Aug 21 12:58:09.094: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-7ab526ea-663c-49d8-9f4c-d382e25f1b3f container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 12:58:09.298: INFO: Waiting for pod pod-projected-configmaps-7ab526ea-663c-49d8-9f4c-d382e25f1b3f to disappear
Aug 21 12:58:09.388: INFO: Pod pod-projected-configmaps-7ab526ea-663c-49d8-9f4c-d382e25f1b3f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:58:09.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2927" for this suite.

• [SLOW TEST:9.134 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3189,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:58:09.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 12:58:09.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9154
I0821 12:58:09.569488      10 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9154, replica count: 1
I0821 12:58:10.621074      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 12:58:11.621871      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 12:58:12.622559      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 12:58:13.623250      10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 21 12:58:13.793: INFO: Created: latency-svc-4gvg6
Aug 21 12:58:13.841: INFO: Got endpoints: latency-svc-4gvg6 [115.752166ms]
Aug 21 12:58:13.988: INFO: Created: latency-svc-vtglr
Aug 21 12:58:14.012: INFO: Got endpoints: latency-svc-vtglr [170.161391ms]
Aug 21 12:58:14.125: INFO: Created: latency-svc-kcc22
Aug 21 12:58:14.151: INFO: Got endpoints: latency-svc-kcc22 [309.016746ms]
Aug 21 12:58:14.189: INFO: Created: latency-svc-xj7dg
Aug 21 12:58:14.282: INFO: Got endpoints: latency-svc-xj7dg [440.536764ms]
Aug 21 12:58:14.310: INFO: Created: latency-svc-dnjtk
Aug 21 12:58:14.325: INFO: Got endpoints: latency-svc-dnjtk [483.776491ms]
Aug 21 12:58:14.467: INFO: Created: latency-svc-kjlh9
Aug 21 12:58:14.502: INFO: Created: latency-svc-k28fq
Aug 21 12:58:14.503: INFO: Got endpoints: latency-svc-kjlh9 [661.586154ms]
Aug 21 12:58:14.518: INFO: Got endpoints: latency-svc-k28fq [673.770375ms]
Aug 21 12:58:14.550: INFO: Created: latency-svc-9bzkr
Aug 21 12:58:14.566: INFO: Got endpoints: latency-svc-9bzkr [724.160105ms]
Aug 21 12:58:14.652: INFO: Created: latency-svc-jbqpc
Aug 21 12:58:14.662: INFO: Got endpoints: latency-svc-jbqpc [819.836351ms]
Aug 21 12:58:14.727: INFO: Created: latency-svc-rvzn8
Aug 21 12:58:14.778: INFO: Got endpoints: latency-svc-rvzn8 [936.319219ms]
Aug 21 12:58:14.821: INFO: Created: latency-svc-89hpm
Aug 21 12:58:14.849: INFO: Got endpoints: latency-svc-89hpm [1.007119776s]
Aug 21 12:58:14.868: INFO: Created: latency-svc-svpwf
Aug 21 12:58:14.916: INFO: Got endpoints: latency-svc-svpwf [1.073425632s]
Aug 21 12:58:14.959: INFO: Created: latency-svc-njdqd
Aug 21 12:58:14.976: INFO: Got endpoints: latency-svc-njdqd [1.134070232s]
Aug 21 12:58:15.055: INFO: Created: latency-svc-qm4lc
Aug 21 12:58:15.083: INFO: Got endpoints: latency-svc-qm4lc [1.241331223s]
Aug 21 12:58:15.138: INFO: Created: latency-svc-x6bjv
Aug 21 12:58:15.216: INFO: Got endpoints: latency-svc-x6bjv [1.372681102s]
Aug 21 12:58:15.253: INFO: Created: latency-svc-t8xsq
Aug 21 12:58:15.277: INFO: Got endpoints: latency-svc-t8xsq [1.432638455s]
Aug 21 12:58:15.306: INFO: Created: latency-svc-snzbf
Aug 21 12:58:15.372: INFO: Got endpoints: latency-svc-snzbf [1.359972826s]
Aug 21 12:58:15.403: INFO: Created: latency-svc-hg8zf
Aug 21 12:58:15.420: INFO: Got endpoints: latency-svc-hg8zf [1.269095954s]
Aug 21 12:58:15.438: INFO: Created: latency-svc-p4ngs
Aug 21 12:58:15.449: INFO: Got endpoints: latency-svc-p4ngs [1.167174822s]
Aug 21 12:58:15.509: INFO: Created: latency-svc-2lfcq
Aug 21 12:58:15.522: INFO: Got endpoints: latency-svc-2lfcq [1.196511728s]
Aug 21 12:58:15.583: INFO: Created: latency-svc-kmwx9
Aug 21 12:58:15.600: INFO: Got endpoints: latency-svc-kmwx9 [1.097053389s]
Aug 21 12:58:15.710: INFO: Created: latency-svc-w6ch6
Aug 21 12:58:15.764: INFO: Got endpoints: latency-svc-w6ch6 [1.245970356s]
Aug 21 12:58:15.851: INFO: Created: latency-svc-cql77
Aug 21 12:58:15.865: INFO: Got endpoints: latency-svc-cql77 [1.298720315s]
Aug 21 12:58:15.894: INFO: Created: latency-svc-q5g7j
Aug 21 12:58:15.937: INFO: Got endpoints: latency-svc-q5g7j [1.274722192s]
Aug 21 12:58:16.015: INFO: Created: latency-svc-dt5r9
Aug 21 12:58:16.034: INFO: Got endpoints: latency-svc-dt5r9 [1.255526335s]
Aug 21 12:58:16.051: INFO: Created: latency-svc-z69mf
Aug 21 12:58:16.064: INFO: Got endpoints: latency-svc-z69mf [1.21470756s]
Aug 21 12:58:16.081: INFO: Created: latency-svc-kzrj2
Aug 21 12:58:16.096: INFO: Got endpoints: latency-svc-kzrj2 [1.179892244s]
Aug 21 12:58:16.162: INFO: Created: latency-svc-jbq76
Aug 21 12:58:16.189: INFO: Got endpoints: latency-svc-jbq76 [1.212533726s]
Aug 21 12:58:16.348: INFO: Created: latency-svc-rqhmh
Aug 21 12:58:16.396: INFO: Got endpoints: latency-svc-rqhmh [1.312221866s]
Aug 21 12:58:16.594: INFO: Created: latency-svc-nz8kg
Aug 21 12:58:16.635: INFO: Got endpoints: latency-svc-nz8kg [1.418662429s]
Aug 21 12:58:17.145: INFO: Created: latency-svc-mzlf6
Aug 21 12:58:17.192: INFO: Got endpoints: latency-svc-mzlf6 [1.914635339s]
Aug 21 12:58:17.397: INFO: Created: latency-svc-nkzh7
Aug 21 12:58:17.721: INFO: Got endpoints: latency-svc-nkzh7 [2.348759067s]
Aug 21 12:58:17.911: INFO: Created: latency-svc-85h8c
Aug 21 12:58:17.943: INFO: Got endpoints: latency-svc-85h8c [2.522211872s]
Aug 21 12:58:17.972: INFO: Created: latency-svc-l2smw
Aug 21 12:58:18.079: INFO: Got endpoints: latency-svc-l2smw [2.629825342s]
Aug 21 12:58:18.109: INFO: Created: latency-svc-72p8d
Aug 21 12:58:18.134: INFO: Got endpoints: latency-svc-72p8d [2.611424601s]
Aug 21 12:58:18.158: INFO: Created: latency-svc-9j5fk
Aug 21 12:58:18.227: INFO: Got endpoints: latency-svc-9j5fk [2.626826181s]
Aug 21 12:58:18.261: INFO: Created: latency-svc-qqw45
Aug 21 12:58:18.265: INFO: Got endpoints: latency-svc-qqw45 [2.50089721s]
Aug 21 12:58:18.319: INFO: Created: latency-svc-zwfr7
Aug 21 12:58:18.372: INFO: Got endpoints: latency-svc-zwfr7 [2.507553533s]
Aug 21 12:58:18.416: INFO: Created: latency-svc-7tpgb
Aug 21 12:58:18.528: INFO: Got endpoints: latency-svc-7tpgb [2.590537058s]
Aug 21 12:58:18.825: INFO: Created: latency-svc-vl7fp
Aug 21 12:58:18.948: INFO: Got endpoints: latency-svc-vl7fp [2.913841506s]
Aug 21 12:58:19.006: INFO: Created: latency-svc-sjfpf
Aug 21 12:58:19.248: INFO: Got endpoints: latency-svc-sjfpf [3.18428628s]
Aug 21 12:58:19.249: INFO: Created: latency-svc-lm6tz
Aug 21 12:58:19.294: INFO: Got endpoints: latency-svc-lm6tz [3.197797452s]
Aug 21 12:58:19.479: INFO: Created: latency-svc-tnbcq
Aug 21 12:58:19.514: INFO: Got endpoints: latency-svc-tnbcq [3.325350092s]
Aug 21 12:58:19.629: INFO: Created: latency-svc-brwxh
Aug 21 12:58:19.678: INFO: Got endpoints: latency-svc-brwxh [3.281673717s]
Aug 21 12:58:19.760: INFO: Created: latency-svc-dgf8g
Aug 21 12:58:19.791: INFO: Got endpoints: latency-svc-dgf8g [3.156515429s]
Aug 21 12:58:19.827: INFO: Created: latency-svc-6j9nk
Aug 21 12:58:19.839: INFO: Got endpoints: latency-svc-6j9nk [2.646779214s]
Aug 21 12:58:19.941: INFO: Created: latency-svc-tj2hk
Aug 21 12:58:19.984: INFO: Got endpoints: latency-svc-tj2hk [2.262706255s]
Aug 21 12:58:20.165: INFO: Created: latency-svc-w4vfm
Aug 21 12:58:20.205: INFO: Got endpoints: latency-svc-w4vfm [2.262316238s]
Aug 21 12:58:20.612: INFO: Created: latency-svc-9jwmz
Aug 21 12:58:20.708: INFO: Got endpoints: latency-svc-9jwmz [2.628440292s]
Aug 21 12:58:20.843: INFO: Created: latency-svc-fqbvg
Aug 21 12:58:20.860: INFO: Got endpoints: latency-svc-fqbvg [2.725724171s]
Aug 21 12:58:21.124: INFO: Created: latency-svc-6w5m4
Aug 21 12:58:21.235: INFO: Got endpoints: latency-svc-6w5m4 [3.007228796s]
Aug 21 12:58:21.298: INFO: Created: latency-svc-k96fr
Aug 21 12:58:21.377: INFO: Got endpoints: latency-svc-k96fr [3.111975612s]
Aug 21 12:58:21.387: INFO: Created: latency-svc-ht92c
Aug 21 12:58:21.404: INFO: Got endpoints: latency-svc-ht92c [3.031498697s]
Aug 21 12:58:21.456: INFO: Created: latency-svc-kcf7s
Aug 21 12:58:21.544: INFO: Got endpoints: latency-svc-kcf7s [3.016284309s]
Aug 21 12:58:21.586: INFO: Created: latency-svc-2fzvg
Aug 21 12:58:21.608: INFO: Got endpoints: latency-svc-2fzvg [2.659425138s]
Aug 21 12:58:21.688: INFO: Created: latency-svc-mlffk
Aug 21 12:58:21.692: INFO: Got endpoints: latency-svc-mlffk [2.443613655s]
Aug 21 12:58:21.766: INFO: Created: latency-svc-l4xxs
Aug 21 12:58:21.778: INFO: Got endpoints: latency-svc-l4xxs [2.48334365s]
Aug 21 12:58:21.839: INFO: Created: latency-svc-lplpl
Aug 21 12:58:21.843: INFO: Got endpoints: latency-svc-lplpl [2.328817771s]
Aug 21 12:58:21.911: INFO: Created: latency-svc-hrpcg
Aug 21 12:58:21.915: INFO: Got endpoints: latency-svc-hrpcg [2.237372653s]
Aug 21 12:58:21.982: INFO: Created: latency-svc-shjrw
Aug 21 12:58:22.012: INFO: Got endpoints: latency-svc-shjrw [2.220688756s]
Aug 21 12:58:22.072: INFO: Created: latency-svc-xhqbh
Aug 21 12:58:22.108: INFO: Got endpoints: latency-svc-xhqbh [2.268499997s]
Aug 21 12:58:22.174: INFO: Created: latency-svc-x6gtn
Aug 21 12:58:22.193: INFO: Got endpoints: latency-svc-x6gtn [2.208656962s]
Aug 21 12:58:22.259: INFO: Created: latency-svc-ksbnl
Aug 21 12:58:22.276: INFO: Got endpoints: latency-svc-ksbnl [2.07065649s]
Aug 21 12:58:22.306: INFO: Created: latency-svc-q8f4d
Aug 21 12:58:22.320: INFO: Got endpoints: latency-svc-q8f4d [1.61094894s]
Aug 21 12:58:22.336: INFO: Created: latency-svc-m666c
Aug 21 12:58:22.350: INFO: Got endpoints: latency-svc-m666c [1.490328449s]
Aug 21 12:58:22.396: INFO: Created: latency-svc-qp6z4
Aug 21 12:58:22.422: INFO: Got endpoints: latency-svc-qp6z4 [1.186580049s]
Aug 21 12:58:22.422: INFO: Created: latency-svc-d4xqr
Aug 21 12:58:22.480: INFO: Got endpoints: latency-svc-d4xqr [1.102202252s]
Aug 21 12:58:22.606: INFO: Created: latency-svc-tjwwb
Aug 21 12:58:22.620: INFO: Got endpoints: latency-svc-tjwwb [1.215488576s]
Aug 21 12:58:22.642: INFO: Created: latency-svc-txjw6
Aug 21 12:58:22.662: INFO: Got endpoints: latency-svc-txjw6 [1.117534643s]
Aug 21 12:58:22.757: INFO: Created: latency-svc-c4bwt
Aug 21 12:58:23.020: INFO: Got endpoints: latency-svc-c4bwt [1.411702638s]
Aug 21 12:58:23.256: INFO: Created: latency-svc-gvj7c
Aug 21 12:58:23.263: INFO: Got endpoints: latency-svc-gvj7c [1.571060866s]
Aug 21 12:58:23.326: INFO: Created: latency-svc-2jkl7
Aug 21 12:58:23.342: INFO: Got endpoints: latency-svc-2jkl7 [1.563468213s]
Aug 21 12:58:23.369: INFO: Created: latency-svc-72hww
Aug 21 12:58:23.396: INFO: Got endpoints: latency-svc-72hww [1.55202993s]
Aug 21 12:58:23.448: INFO: Created: latency-svc-gwtp8
Aug 21 12:58:23.453: INFO: Got endpoints: latency-svc-gwtp8 [1.538168735s]
Aug 21 12:58:23.483: INFO: Created: latency-svc-hgm77
Aug 21 12:58:23.491: INFO: Got endpoints: latency-svc-hgm77 [1.478880119s]
Aug 21 12:58:23.520: INFO: Created: latency-svc-pvdw8
Aug 21 12:58:23.535: INFO: Got endpoints: latency-svc-pvdw8 [1.42658012s]
Aug 21 12:58:23.587: INFO: Created: latency-svc-qp7dz
Aug 21 12:58:23.604: INFO: Got endpoints: latency-svc-qp7dz [1.40997308s]
Aug 21 12:58:23.653: INFO: Created: latency-svc-8brhj
Aug 21 12:58:23.667: INFO: Got endpoints: latency-svc-8brhj [1.391053351s]
Aug 21 12:58:23.756: INFO: Created: latency-svc-zmlkf
Aug 21 12:58:23.762: INFO: Got endpoints: latency-svc-zmlkf [1.442308896s]
Aug 21 12:58:23.801: INFO: Created: latency-svc-shzc9
Aug 21 12:58:23.818: INFO: Got endpoints: latency-svc-shzc9 [1.467651779s]
Aug 21 12:58:23.838: INFO: Created: latency-svc-smn2w
Aug 21 12:58:23.853: INFO: Got endpoints: latency-svc-smn2w [1.431510276s]
Aug 21 12:58:23.898: INFO: Created: latency-svc-pr2bk
Aug 21 12:58:23.951: INFO: Got endpoints: latency-svc-pr2bk [1.471451706s]
Aug 21 12:58:24.114: INFO: Created: latency-svc-pgkvt
Aug 21 12:58:24.125: INFO: Got endpoints: latency-svc-pgkvt [1.504535713s]
Aug 21 12:58:24.209: INFO: Created: latency-svc-wqq7t
Aug 21 12:58:24.294: INFO: Got endpoints: latency-svc-wqq7t [1.631870901s]
Aug 21 12:58:24.407: INFO: Created: latency-svc-67fcm
Aug 21 12:58:24.416: INFO: Got endpoints: latency-svc-67fcm [1.395843352s]
Aug 21 12:58:24.438: INFO: Created: latency-svc-jq6b6
Aug 21 12:58:24.454: INFO: Got endpoints: latency-svc-jq6b6 [1.19025842s]
Aug 21 12:58:24.485: INFO: Created: latency-svc-2g2xs
Aug 21 12:58:24.551: INFO: Got endpoints: latency-svc-2g2xs [1.208927997s]
Aug 21 12:58:24.563: INFO: Created: latency-svc-r8w2p
Aug 21 12:58:24.598: INFO: Got endpoints: latency-svc-r8w2p [1.202310153s]
Aug 21 12:58:24.878: INFO: Created: latency-svc-hx6lf
Aug 21 12:58:25.189: INFO: Got endpoints: latency-svc-hx6lf [1.735440074s]
Aug 21 12:58:25.475: INFO: Created: latency-svc-9cz4m
Aug 21 12:58:25.513: INFO: Created: latency-svc-fzs6z
Aug 21 12:58:25.513: INFO: Got endpoints: latency-svc-9cz4m [2.021891163s]
Aug 21 12:58:25.544: INFO: Got endpoints: latency-svc-fzs6z [2.008608903s]
Aug 21 12:58:25.831: INFO: Created: latency-svc-gnrnw
Aug 21 12:58:26.088: INFO: Got endpoints: latency-svc-gnrnw [2.483993979s]
Aug 21 12:58:26.090: INFO: Created: latency-svc-sh25v
Aug 21 12:58:26.097: INFO: Got endpoints: latency-svc-sh25v [2.429653642s]
Aug 21 12:58:26.233: INFO: Created: latency-svc-xpnhm
Aug 21 12:58:26.259: INFO: Got endpoints: latency-svc-xpnhm [2.497005601s]
Aug 21 12:58:26.294: INFO: Created: latency-svc-wbwrb
Aug 21 12:58:26.301: INFO: Got endpoints: latency-svc-wbwrb [2.483045965s]
Aug 21 12:58:26.323: INFO: Created: latency-svc-6dqtv
Aug 21 12:58:26.583: INFO: Got endpoints: latency-svc-6dqtv [2.729166466s]
Aug 21 12:58:26.611: INFO: Created: latency-svc-hnwxs
Aug 21 12:58:26.827: INFO: Got endpoints: latency-svc-hnwxs [2.875867584s]
Aug 21 12:58:26.947: INFO: Created: latency-svc-8qptv
Aug 21 12:58:26.986: INFO: Got endpoints: latency-svc-8qptv [2.860936735s]
Aug 21 12:58:27.296: INFO: Created: latency-svc-hq7n6
Aug 21 12:58:27.300: INFO: Got endpoints: latency-svc-hq7n6 [3.006046578s]
Aug 21 12:58:27.485: INFO: Created: latency-svc-ttl62
Aug 21 12:58:27.508: INFO: Got endpoints: latency-svc-ttl62 [3.091794111s]
Aug 21 12:58:27.542: INFO: Created: latency-svc-wrgxc
Aug 21 12:58:27.677: INFO: Got endpoints: latency-svc-wrgxc [3.222603518s]
Aug 21 12:58:27.722: INFO: Created: latency-svc-w5zn8
Aug 21 12:58:27.741: INFO: Got endpoints: latency-svc-w5zn8 [3.189902636s]
Aug 21 12:58:27.764: INFO: Created: latency-svc-ssgv6
Aug 21 12:58:27.867: INFO: Got endpoints: latency-svc-ssgv6 [3.269226106s]
Aug 21 12:58:27.896: INFO: Created: latency-svc-j4vds
Aug 21 12:58:27.928: INFO: Got endpoints: latency-svc-j4vds [2.738655418s]
Aug 21 12:58:28.439: INFO: Created: latency-svc-27q2x
Aug 21 12:58:28.477: INFO: Got endpoints: latency-svc-27q2x [2.963379504s]
Aug 21 12:58:28.498: INFO: Created: latency-svc-t879b
Aug 21 12:58:28.510: INFO: Got endpoints: latency-svc-t879b [2.966161073s]
Aug 21 12:58:28.611: INFO: Created: latency-svc-55924
Aug 21 12:58:28.615: INFO: Got endpoints: latency-svc-55924 [2.527148566s]
Aug 21 12:58:28.660: INFO: Created: latency-svc-jq9gd
Aug 21 12:58:28.692: INFO: Got endpoints: latency-svc-jq9gd [2.594697784s]
Aug 21 12:58:28.810: INFO: Created: latency-svc-xkgmp
Aug 21 12:58:29.061: INFO: Got endpoints: latency-svc-xkgmp [2.801142914s]
Aug 21 12:58:29.116: INFO: Created: latency-svc-gnpjx
Aug 21 12:58:29.142: INFO: Got endpoints: latency-svc-gnpjx [2.841146376s]
Aug 21 12:58:29.379: INFO: Created: latency-svc-rg76f
Aug 21 12:58:29.389: INFO: Got endpoints: latency-svc-rg76f [2.805841124s]
Aug 21 12:58:29.417: INFO: Created: latency-svc-z86x7
Aug 21 12:58:29.431: INFO: Got endpoints: latency-svc-z86x7 [2.603165875s]
Aug 21 12:58:29.874: INFO: Created: latency-svc-kn9lh
Aug 21 12:58:29.879: INFO: Got endpoints: latency-svc-kn9lh [2.89325938s]
Aug 21 12:58:30.064: INFO: Created: latency-svc-sjvxs
Aug 21 12:58:30.288: INFO: Got endpoints: latency-svc-sjvxs [2.98788341s]
Aug 21 12:58:30.290: INFO: Created: latency-svc-n9hcg
Aug 21 12:58:30.318: INFO: Got endpoints: latency-svc-n9hcg [2.809833363s]
Aug 21 12:58:30.431: INFO: Created: latency-svc-rrp82
Aug 21 12:58:30.455: INFO: Got endpoints: latency-svc-rrp82 [2.778019648s]
Aug 21 12:58:30.485: INFO: Created: latency-svc-p8zx2
Aug 21 12:58:30.498: INFO: Got endpoints: latency-svc-p8zx2 [2.757029106s]
Aug 21 12:58:30.522: INFO: Created: latency-svc-t8kws
Aug 21 12:58:30.565: INFO: Got endpoints: latency-svc-t8kws [2.697002749s]
Aug 21 12:58:30.629: INFO: Created: latency-svc-bp9b8
Aug 21 12:58:30.655: INFO: Got endpoints: latency-svc-bp9b8 [2.725882908s]
Aug 21 12:58:30.715: INFO: Created: latency-svc-7wnr4
Aug 21 12:58:30.758: INFO: Got endpoints: latency-svc-7wnr4 [2.280672402s]
Aug 21 12:58:30.792: INFO: Created: latency-svc-pjzgz
Aug 21 12:58:30.806: INFO: Got endpoints: latency-svc-pjzgz [2.29549692s]
Aug 21 12:58:30.869: INFO: Created: latency-svc-dthbx
Aug 21 12:58:30.889: INFO: Got endpoints: latency-svc-dthbx [131.220577ms]
Aug 21 12:58:30.930: INFO: Created: latency-svc-zk2q6
Aug 21 12:58:30.956: INFO: Got endpoints: latency-svc-zk2q6 [2.341026255s]
Aug 21 12:58:31.048: INFO: Created: latency-svc-dblhp
Aug 21 12:58:31.104: INFO: Created: latency-svc-4x2sn
Aug 21 12:58:31.104: INFO: Got endpoints: latency-svc-dblhp [2.41192156s]
Aug 21 12:58:31.134: INFO: Got endpoints: latency-svc-4x2sn [2.072979439s]
Aug 21 12:58:31.216: INFO: Created: latency-svc-l7hjr
Aug 21 12:58:31.221: INFO: Got endpoints: latency-svc-l7hjr [2.078008806s]
Aug 21 12:58:31.257: INFO: Created: latency-svc-hhz8c
Aug 21 12:58:31.269: INFO: Got endpoints: latency-svc-hhz8c [1.88017726s]
Aug 21 12:58:31.308: INFO: Created: latency-svc-nb5lx
Aug 21 12:58:31.348: INFO: Got endpoints: latency-svc-nb5lx [1.917041315s]
Aug 21 12:58:31.363: INFO: Created: latency-svc-grglf
Aug 21 12:58:31.385: INFO: Got endpoints: latency-svc-grglf [1.505479879s]
Aug 21 12:58:31.415: INFO: Created: latency-svc-dmtzc
Aug 21 12:58:31.516: INFO: Got endpoints: latency-svc-dmtzc [1.22732661s]
Aug 21 12:58:31.516: INFO: Created: latency-svc-p2htl
Aug 21 12:58:31.559: INFO: Got endpoints: latency-svc-p2htl [1.240344586s]
Aug 21 12:58:31.664: INFO: Created: latency-svc-wtlsh
Aug 21 12:58:31.669: INFO: Got endpoints: latency-svc-wtlsh [1.213821614s]
Aug 21 12:58:31.692: INFO: Created: latency-svc-85pw6
Aug 21 12:58:31.716: INFO: Got endpoints: latency-svc-85pw6 [1.217223943s]
Aug 21 12:58:31.745: INFO: Created: latency-svc-slwsg
Aug 21 12:58:31.754: INFO: Got endpoints: latency-svc-slwsg [1.188789944s]
Aug 21 12:58:31.823: INFO: Created: latency-svc-hnznj
Aug 21 12:58:31.827: INFO: Got endpoints: latency-svc-hnznj [1.172319686s]
Aug 21 12:58:31.902: INFO: Created: latency-svc-pvl4x
Aug 21 12:58:31.983: INFO: Got endpoints: latency-svc-pvl4x [1.1774458s]
Aug 21 12:58:32.016: INFO: Created: latency-svc-jh2j7
Aug 21 12:58:32.043: INFO: Got endpoints: latency-svc-jh2j7 [1.153699666s]
Aug 21 12:58:32.083: INFO: Created: latency-svc-67xcz
Aug 21 12:58:32.163: INFO: Got endpoints: latency-svc-67xcz [1.206536808s]
Aug 21 12:58:32.165: INFO: Created: latency-svc-d2n2q
Aug 21 12:58:32.189: INFO: Got endpoints: latency-svc-d2n2q [1.085194214s]
Aug 21 12:58:32.213: INFO: Created: latency-svc-wfn82
Aug 21 12:58:32.230: INFO: Got endpoints: latency-svc-wfn82 [1.095593965s]
Aug 21 12:58:32.250: INFO: Created: latency-svc-q46bm
Aug 21 12:58:32.260: INFO: Got endpoints: latency-svc-q46bm [1.03895488s]
Aug 21 12:58:32.341: INFO: Created: latency-svc-6q9sh
Aug 21 12:58:32.375: INFO: Got endpoints: latency-svc-6q9sh [1.105998089s]
Aug 21 12:58:32.419: INFO: Created: latency-svc-8smkj
Aug 21 12:58:32.434: INFO: Got endpoints: latency-svc-8smkj [1.085730108s]
Aug 21 12:58:32.510: INFO: Created: latency-svc-fxsgk
Aug 21 12:58:32.539: INFO: Created: latency-svc-vwwh5
Aug 21 12:58:32.539: INFO: Got endpoints: latency-svc-fxsgk [1.15399304s]
Aug 21 12:58:32.555: INFO: Got endpoints: latency-svc-vwwh5 [1.038673245s]
Aug 21 12:58:32.591: INFO: Created: latency-svc-z78vm
Aug 21 12:58:32.671: INFO: Created: latency-svc-b9c6g
Aug 21 12:58:32.671: INFO: Got endpoints: latency-svc-z78vm [1.112649493s]
Aug 21 12:58:32.688: INFO: Got endpoints: latency-svc-b9c6g [1.018724343s]
Aug 21 12:58:32.719: INFO: Created: latency-svc-slcr7
Aug 21 12:58:32.748: INFO: Got endpoints: latency-svc-slcr7 [1.032055925s]
Aug 21 12:58:32.796: INFO: Created: latency-svc-8m84s
Aug 21 12:58:32.801: INFO: Got endpoints: latency-svc-8m84s [1.047527462s]
Aug 21 12:58:32.826: INFO: Created: latency-svc-rxtns
Aug 21 12:58:32.850: INFO: Got endpoints: latency-svc-rxtns [1.022665857s]
Aug 21 12:58:32.879: INFO: Created: latency-svc-c9hs9
Aug 21 12:58:32.937: INFO: Got endpoints: latency-svc-c9hs9 [953.998681ms]
Aug 21 12:58:32.945: INFO: Created: latency-svc-n4krs
Aug 21 12:58:32.966: INFO: Got endpoints: latency-svc-n4krs [922.353444ms]
Aug 21 12:58:32.987: INFO: Created: latency-svc-m7mhs
Aug 21 12:58:33.002: INFO: Got endpoints: latency-svc-m7mhs [838.109271ms]
Aug 21 12:58:33.085: INFO: Created: latency-svc-8mwnl
Aug 21 12:58:33.092: INFO: Got endpoints: latency-svc-8mwnl [902.03373ms]
Aug 21 12:58:33.126: INFO: Created: latency-svc-vxnp6
Aug 21 12:58:33.140: INFO: Got endpoints: latency-svc-vxnp6 [910.659876ms]
Aug 21 12:58:33.162: INFO: Created: latency-svc-5f9bl
Aug 21 12:58:33.228: INFO: Got endpoints: latency-svc-5f9bl [967.815764ms]
Aug 21 12:58:33.264: INFO: Created: latency-svc-95856
Aug 21 12:58:33.277: INFO: Got endpoints: latency-svc-95856 [901.065793ms]
Aug 21 12:58:33.300: INFO: Created: latency-svc-vjlx6
Aug 21 12:58:33.312: INFO: Got endpoints: latency-svc-vjlx6 [878.085959ms]
Aug 21 12:58:33.371: INFO: Created: latency-svc-k62pk
Aug 21 12:58:33.379: INFO: Got endpoints: latency-svc-k62pk [839.98333ms]
Aug 21 12:58:33.403: INFO: Created: latency-svc-cmgxh
Aug 21 12:58:33.416: INFO: Got endpoints: latency-svc-cmgxh [860.60171ms]
Aug 21 12:58:33.438: INFO: Created: latency-svc-qdjhm
Aug 21 12:58:33.463: INFO: Got endpoints: latency-svc-qdjhm [791.191323ms]
Aug 21 12:58:33.521: INFO: Created: latency-svc-67jdw
Aug 21 12:58:33.530: INFO: Got endpoints: latency-svc-67jdw [842.324182ms]
Aug 21 12:58:33.552: INFO: Created: latency-svc-vfzsq
Aug 21 12:58:33.566: INFO: Got endpoints: latency-svc-vfzsq [817.629946ms]
Aug 21 12:58:33.582: INFO: Created: latency-svc-kdsc8
Aug 21 12:58:33.597: INFO: Got endpoints: latency-svc-kdsc8 [795.811023ms]
Aug 21 12:58:33.618: INFO: Created: latency-svc-46hsx
Aug 21 12:58:33.658: INFO: Got endpoints: latency-svc-46hsx [808.21841ms]
Aug 21 12:58:33.678: INFO: Created: latency-svc-w42bg
Aug 21 12:58:33.714: INFO: Got endpoints: latency-svc-w42bg [776.658782ms]
Aug 21 12:58:33.751: INFO: Created: latency-svc-lz5wp
Aug 21 12:58:33.760: INFO: Got endpoints: latency-svc-lz5wp [793.594499ms]
Aug 21 12:58:33.820: INFO: Created: latency-svc-6fnks
Aug 21 12:58:33.825: INFO: Got endpoints: latency-svc-6fnks [822.935961ms]
Aug 21 12:58:33.847: INFO: Created: latency-svc-frfcp
Aug 21 12:58:33.862: INFO: Got endpoints: latency-svc-frfcp [770.19809ms]
Aug 21 12:58:33.908: INFO: Created: latency-svc-vnfcd
Aug 21 12:58:34.061: INFO: Got endpoints: latency-svc-vnfcd [920.086449ms]
Aug 21 12:58:34.064: INFO: Created: latency-svc-xcxdr
Aug 21 12:58:34.154: INFO: Got endpoints: latency-svc-xcxdr [926.023785ms]
Aug 21 12:58:34.228: INFO: Created: latency-svc-r9blh
Aug 21 12:58:34.254: INFO: Got endpoints: latency-svc-r9blh [977.288728ms]
Aug 21 12:58:34.308: INFO: Created: latency-svc-knhtr
Aug 21 12:58:34.384: INFO: Got endpoints: latency-svc-knhtr [1.071338411s]
Aug 21 12:58:34.423: INFO: Created: latency-svc-8hw9j
Aug 21 12:58:34.482: INFO: Got endpoints: latency-svc-8hw9j [1.103057084s]
Aug 21 12:58:34.557: INFO: Created: latency-svc-8w2bf
Aug 21 12:58:34.571: INFO: Got endpoints: latency-svc-8w2bf [1.155208259s]
Aug 21 12:58:34.609: INFO: Created: latency-svc-xvspw
Aug 21 12:58:34.657: INFO: Got endpoints: latency-svc-xvspw [1.194390337s]
Aug 21 12:58:34.719: INFO: Created: latency-svc-dxqs4
Aug 21 12:58:34.741: INFO: Got endpoints: latency-svc-dxqs4 [1.210809165s]
Aug 21 12:58:34.771: INFO: Created: latency-svc-wh2hz
Aug 21 12:58:34.794: INFO: Got endpoints: latency-svc-wh2hz [1.228646687s]
Aug 21 12:58:34.813: INFO: Created: latency-svc-f5htj
Aug 21 12:58:34.874: INFO: Got endpoints: latency-svc-f5htj [1.276811911s]
Aug 21 12:58:34.877: INFO: Created: latency-svc-nkghb
Aug 21 12:58:34.899: INFO: Got endpoints: latency-svc-nkghb [1.239983942s]
Aug 21 12:58:34.922: INFO: Created: latency-svc-4fpvg
Aug 21 12:58:34.933: INFO: Got endpoints: latency-svc-4fpvg [1.218527107s]
Aug 21 12:58:34.951: INFO: Created: latency-svc-dbrkn
Aug 21 12:58:34.965: INFO: Got endpoints: latency-svc-dbrkn [1.205349902s]
Aug 21 12:58:35.031: INFO: Created: latency-svc-9vr2v
Aug 21 12:58:35.041: INFO: Got endpoints: latency-svc-9vr2v [1.216069591s]
Aug 21 12:58:35.065: INFO: Created: latency-svc-flv4j
Aug 21 12:58:35.079: INFO: Got endpoints: latency-svc-flv4j [1.216911159s]
Aug 21 12:58:35.101: INFO: Created: latency-svc-v54x5
Aug 21 12:58:35.117: INFO: Got endpoints: latency-svc-v54x5 [1.056544439s]
Aug 21 12:58:35.174: INFO: Created: latency-svc-gwhjt
Aug 21 12:58:35.198: INFO: Got endpoints: latency-svc-gwhjt [1.0441635s]
Aug 21 12:58:35.240: INFO: Created: latency-svc-bbxwz
Aug 21 12:58:35.255: INFO: Got endpoints: latency-svc-bbxwz [1.000961942s]
Aug 21 12:58:35.335: INFO: Created: latency-svc-pfsfj
Aug 21 12:58:35.344: INFO: Got endpoints: latency-svc-pfsfj [959.998249ms]
Aug 21 12:58:35.413: INFO: Created: latency-svc-h4mdk
Aug 21 12:58:35.430: INFO: Got endpoints: latency-svc-h4mdk [946.927006ms]
Aug 21 12:58:35.491: INFO: Created: latency-svc-4sxwg
Aug 21 12:58:35.496: INFO: Got endpoints: latency-svc-4sxwg [924.658989ms]
Aug 21 12:58:35.533: INFO: Created: latency-svc-zt5hd
Aug 21 12:58:35.571: INFO: Got endpoints: latency-svc-zt5hd [913.80305ms]
Aug 21 12:58:35.635: INFO: Created: latency-svc-tdf84
Aug 21 12:58:35.651: INFO: Got endpoints: latency-svc-tdf84 [909.977465ms]
Aug 21 12:58:35.702: INFO: Created: latency-svc-wjths
Aug 21 12:58:35.718: INFO: Got endpoints: latency-svc-wjths [923.533236ms]
Aug 21 12:58:35.773: INFO: Created: latency-svc-ntqrc
Aug 21 12:58:35.798: INFO: Created: latency-svc-gknnl
Aug 21 12:58:35.800: INFO: Got endpoints: latency-svc-ntqrc [925.139156ms]
Aug 21 12:58:35.823: INFO: Got endpoints: latency-svc-gknnl [923.987869ms]
Aug 21 12:58:35.852: INFO: Created: latency-svc-j4qf5
Aug 21 12:58:35.940: INFO: Got endpoints: latency-svc-j4qf5 [1.006760593s]
Aug 21 12:58:35.954: INFO: Created: latency-svc-qc5cj
Aug 21 12:58:35.978: INFO: Got endpoints: latency-svc-qc5cj [1.012390764s]
Aug 21 12:58:36.008: INFO: Created: latency-svc-xmlrd
Aug 21 12:58:36.032: INFO: Got endpoints: latency-svc-xmlrd [991.078662ms]
Aug 21 12:58:36.105: INFO: Created: latency-svc-kzw5s
Aug 21 12:58:36.170: INFO: Got endpoints: latency-svc-kzw5s [1.090972016s]
Aug 21 12:58:36.305: INFO: Created: latency-svc-zhqkt
Aug 21 12:58:36.345: INFO: Got endpoints: latency-svc-zhqkt [1.227008231s]
Aug 21 12:58:36.503: INFO: Created: latency-svc-gx82v
Aug 21 12:58:36.510: INFO: Got endpoints: latency-svc-gx82v [1.311463569s]
Aug 21 12:58:36.512: INFO: Latencies: [131.220577ms 170.161391ms 309.016746ms 440.536764ms 483.776491ms 661.586154ms 673.770375ms 724.160105ms 770.19809ms 776.658782ms 791.191323ms 793.594499ms 795.811023ms 808.21841ms 817.629946ms 819.836351ms 822.935961ms 838.109271ms 839.98333ms 842.324182ms 860.60171ms 878.085959ms 901.065793ms 902.03373ms 909.977465ms 910.659876ms 913.80305ms 920.086449ms 922.353444ms 923.533236ms 923.987869ms 924.658989ms 925.139156ms 926.023785ms 936.319219ms 946.927006ms 953.998681ms 959.998249ms 967.815764ms 977.288728ms 991.078662ms 1.000961942s 1.006760593s 1.007119776s 1.012390764s 1.018724343s 1.022665857s 1.032055925s 1.038673245s 1.03895488s 1.0441635s 1.047527462s 1.056544439s 1.071338411s 1.073425632s 1.085194214s 1.085730108s 1.090972016s 1.095593965s 1.097053389s 1.102202252s 1.103057084s 1.105998089s 1.112649493s 1.117534643s 1.134070232s 1.153699666s 1.15399304s 1.155208259s 1.167174822s 1.172319686s 1.1774458s 1.179892244s 1.186580049s 1.188789944s 1.19025842s 1.194390337s 1.196511728s 1.202310153s 1.205349902s 1.206536808s 1.208927997s 1.210809165s 1.212533726s 1.213821614s 1.21470756s 1.215488576s 1.216069591s 1.216911159s 1.217223943s 1.218527107s 1.227008231s 1.22732661s 1.228646687s 1.239983942s 1.240344586s 1.241331223s 1.245970356s 1.255526335s 1.269095954s 1.274722192s 1.276811911s 1.298720315s 1.311463569s 1.312221866s 1.359972826s 1.372681102s 1.391053351s 1.395843352s 1.40997308s 1.411702638s 1.418662429s 1.42658012s 1.431510276s 1.432638455s 1.442308896s 1.467651779s 1.471451706s 1.478880119s 1.490328449s 1.504535713s 1.505479879s 1.538168735s 1.55202993s 1.563468213s 1.571060866s 1.61094894s 1.631870901s 1.735440074s 1.88017726s 1.914635339s 1.917041315s 2.008608903s 2.021891163s 2.07065649s 2.072979439s 2.078008806s 2.208656962s 2.220688756s 2.237372653s 2.262316238s 2.262706255s 2.268499997s 2.280672402s 2.29549692s 2.328817771s 2.341026255s 2.348759067s 2.41192156s 2.429653642s 2.443613655s 2.483045965s 2.48334365s 2.483993979s 2.497005601s 2.50089721s 2.507553533s 2.522211872s 2.527148566s 2.590537058s 2.594697784s 2.603165875s 2.611424601s 2.626826181s 2.628440292s 2.629825342s 2.646779214s 2.659425138s 2.697002749s 2.725724171s 2.725882908s 2.729166466s 2.738655418s 2.757029106s 2.778019648s 2.801142914s 2.805841124s 2.809833363s 2.841146376s 2.860936735s 2.875867584s 2.89325938s 2.913841506s 2.963379504s 2.966161073s 2.98788341s 3.006046578s 3.007228796s 3.016284309s 3.031498697s 3.091794111s 3.111975612s 3.156515429s 3.18428628s 3.189902636s 3.197797452s 3.222603518s 3.269226106s 3.281673717s 3.325350092s]
Aug 21 12:58:36.514: INFO: 50 %ile: 1.274722192s
Aug 21 12:58:36.514: INFO: 90 %ile: 2.875867584s
Aug 21 12:58:36.514: INFO: 99 %ile: 3.281673717s
Aug 21 12:58:36.514: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:58:36.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9154" for this suite.

• [SLOW TEST:27.149 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":185,"skipped":3197,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:58:36.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-e361af97-f516-4ccb-88a3-afae59f8bee7
STEP: Creating configMap with name cm-test-opt-upd-327bdd75-9fd4-4759-8448-01d1545226d1
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e361af97-f516-4ccb-88a3-afae59f8bee7
STEP: Updating configmap cm-test-opt-upd-327bdd75-9fd4-4759-8448-01d1545226d1
STEP: Creating configMap with name cm-test-opt-create-022d95ff-e810-4693-911d-1d836df751c8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:59:46.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6480" for this suite.

• [SLOW TEST:69.713 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3198,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:59:46.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Aug 21 12:59:46.402: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix576060097/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 12:59:47.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4514" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":187,"skipped":3203,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 12:59:47.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Aug 21 12:59:47.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3903'
Aug 21 12:59:49.476: INFO: stderr: ""
Aug 21 12:59:49.477: INFO: stdout: "pod/pause created\n"
Aug 21 12:59:49.477: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 21 12:59:49.477: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3903" to be "running and ready"
Aug 21 12:59:49.627: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 149.758589ms
Aug 21 12:59:51.673: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195796543s
Aug 21 12:59:53.681: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.203938716s
Aug 21 12:59:53.681: INFO: Pod "pause" satisfied condition "running and ready"
Aug 21 12:59:53.681: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 21 12:59:53.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3903'
Aug 21 12:59:54.958: INFO: stderr: ""
Aug 21 12:59:54.958: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 21 12:59:54.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3903'
Aug 21 12:59:56.224: INFO: stderr: ""
Aug 21 12:59:56.225: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 21 12:59:56.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3903'
Aug 21 12:59:57.466: INFO: stderr: ""
Aug 21 12:59:57.466: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 21 12:59:57.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3903'
Aug 21 12:59:58.728: INFO: stderr: ""
Aug 21 12:59:58.729: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Aug 21 12:59:58.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3903'
Aug 21 13:00:00.048: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 13:00:00.048: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 21 13:00:00.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3903'
Aug 21 13:00:01.304: INFO: stderr: "No resources found in kubectl-3903 namespace.\n"
Aug 21 13:00:01.304: INFO: stdout: ""
Aug 21 13:00:01.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3903 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 13:00:02.544: INFO: stderr: ""
Aug 21 13:00:02.545: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:00:02.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3903" for this suite.

• [SLOW TEST:15.114 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":188,"skipped":3236,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:00:02.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 21 13:00:02.672: INFO: Waiting up to 5m0s for pod "pod-72a5fea9-bef3-40fe-baf2-14e1a0e7032a" in namespace "emptydir-9600" to be "Succeeded or Failed"
Aug 21 13:00:02.685: INFO: Pod "pod-72a5fea9-bef3-40fe-baf2-14e1a0e7032a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.70493ms
Aug 21 13:00:04.693: INFO: Pod "pod-72a5fea9-bef3-40fe-baf2-14e1a0e7032a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020671143s
Aug 21 13:00:06.701: INFO: Pod "pod-72a5fea9-bef3-40fe-baf2-14e1a0e7032a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028348025s
STEP: Saw pod success
Aug 21 13:00:06.701: INFO: Pod "pod-72a5fea9-bef3-40fe-baf2-14e1a0e7032a" satisfied condition "Succeeded or Failed"
Aug 21 13:00:06.706: INFO: Trying to get logs from node kali-worker pod pod-72a5fea9-bef3-40fe-baf2-14e1a0e7032a container test-container: 
STEP: delete the pod
Aug 21 13:00:06.781: INFO: Waiting for pod pod-72a5fea9-bef3-40fe-baf2-14e1a0e7032a to disappear
Aug 21 13:00:06.822: INFO: Pod pod-72a5fea9-bef3-40fe-baf2-14e1a0e7032a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:00:06.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9600" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3241,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:00:06.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 13:00:06.944: INFO: Waiting up to 5m0s for pod "busybox-user-65534-31067055-9ca2-43a3-b54f-9387a1f25f24" in namespace "security-context-test-6667" to be "Succeeded or Failed"
Aug 21 13:00:06.955: INFO: Pod "busybox-user-65534-31067055-9ca2-43a3-b54f-9387a1f25f24": Phase="Pending", Reason="", readiness=false. Elapsed: 10.630377ms
Aug 21 13:00:08.961: INFO: Pod "busybox-user-65534-31067055-9ca2-43a3-b54f-9387a1f25f24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01646276s
Aug 21 13:00:10.968: INFO: Pod "busybox-user-65534-31067055-9ca2-43a3-b54f-9387a1f25f24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023494205s
Aug 21 13:00:10.968: INFO: Pod "busybox-user-65534-31067055-9ca2-43a3-b54f-9387a1f25f24" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:00:10.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6667" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3242,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:00:10.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 13:00:11.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b607b88c-09f4-434a-9ff0-d6e40231e065" in namespace "downward-api-4737" to be "Succeeded or Failed"
Aug 21 13:00:11.118: INFO: Pod "downwardapi-volume-b607b88c-09f4-434a-9ff0-d6e40231e065": Phase="Pending", Reason="", readiness=false. Elapsed: 29.377917ms
Aug 21 13:00:13.278: INFO: Pod "downwardapi-volume-b607b88c-09f4-434a-9ff0-d6e40231e065": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189866885s
Aug 21 13:00:15.286: INFO: Pod "downwardapi-volume-b607b88c-09f4-434a-9ff0-d6e40231e065": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197655579s
Aug 21 13:00:17.295: INFO: Pod "downwardapi-volume-b607b88c-09f4-434a-9ff0-d6e40231e065": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206743235s
STEP: Saw pod success
Aug 21 13:00:17.296: INFO: Pod "downwardapi-volume-b607b88c-09f4-434a-9ff0-d6e40231e065" satisfied condition "Succeeded or Failed"
Aug 21 13:00:17.301: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-b607b88c-09f4-434a-9ff0-d6e40231e065 container client-container: 
STEP: delete the pod
Aug 21 13:00:17.350: INFO: Waiting for pod downwardapi-volume-b607b88c-09f4-434a-9ff0-d6e40231e065 to disappear
Aug 21 13:00:17.364: INFO: Pod downwardapi-volume-b607b88c-09f4-434a-9ff0-d6e40231e065 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:00:17.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4737" for this suite.

• [SLOW TEST:6.395 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3242,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:00:17.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 21 13:00:17.470: INFO: PodSpec: initContainers in spec.initContainers
Aug 21 13:01:09.012: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1bc079c2-2e8e-4465-a602-a393509e701e", GenerateName:"", Namespace:"init-container-2389", SelfLink:"/api/v1/namespaces/init-container-2389/pods/pod-init-1bc079c2-2e8e-4465-a602-a393509e701e", UID:"5ff3aa24-586f-4dcc-a103-d8e25df05408", ResourceVersion:"2127659", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733611617, loc:(*time.Location)(0x74b2e20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"469302012"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x4006302860), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40063028a0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x40063028e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4006302900)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7qvrc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x400519d200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7qvrc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7qvrc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7qvrc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400372bb08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4001038310), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x400372bb90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x400372bbb0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x400372bbb8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x400372bbbc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611617, loc:(*time.Location)(0x74b2e20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611617, loc:(*time.Location)(0x74b2e20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611617, loc:(*time.Location)(0x74b2e20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733611617, loc:(*time.Location)(0x74b2e20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.1.215", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.215"}}, StartTime:(*v1.Time)(0x4006302920), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x4006302960), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x40010383f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://09404437bba48ac896964c682ff631530ff8b0095eb14607f173c7c19f9da81f", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4006302980), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4006302940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0x400372bc3f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:01:09.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2389" for this suite.

• [SLOW TEST:51.758 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":192,"skipped":3262,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:01:09.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9159.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9159.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9159.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9159.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 180.69.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.69.180_udp@PTR;check="$$(dig +tcp +noall +answer +search 180.69.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.69.180_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9159.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9159.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9159.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9159.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9159.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9159.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9159.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 180.69.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.69.180_udp@PTR;check="$$(dig +tcp +noall +answer +search 180.69.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.69.180_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 13:01:19.526: INFO: Unable to read wheezy_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:19.533: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:19.536: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:19.539: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:19.564: INFO: Unable to read jessie_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:19.568: INFO: Unable to read jessie_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:19.572: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:19.577: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:19.670: INFO: Lookups using dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a failed for: [wheezy_udp@dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_udp@dns-test-service.dns-9159.svc.cluster.local jessie_tcp@dns-test-service.dns-9159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local]

Aug 21 13:01:24.698: INFO: Unable to read wheezy_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:24.704: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:24.709: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:24.713: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:24.741: INFO: Unable to read jessie_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:24.745: INFO: Unable to read jessie_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:24.749: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:24.752: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:24.776: INFO: Lookups using dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a failed for: [wheezy_udp@dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_udp@dns-test-service.dns-9159.svc.cluster.local jessie_tcp@dns-test-service.dns-9159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local]

Aug 21 13:01:29.681: INFO: Unable to read wheezy_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:29.686: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:29.691: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:29.694: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:29.720: INFO: Unable to read jessie_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:29.723: INFO: Unable to read jessie_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:29.726: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:29.733: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:29.749: INFO: Lookups using dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a failed for: [wheezy_udp@dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_udp@dns-test-service.dns-9159.svc.cluster.local jessie_tcp@dns-test-service.dns-9159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local]

Aug 21 13:01:34.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:34.687: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:34.691: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:34.695: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:34.765: INFO: Unable to read jessie_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:34.770: INFO: Unable to read jessie_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:34.817: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:34.824: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:34.954: INFO: Lookups using dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a failed for: [wheezy_udp@dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_udp@dns-test-service.dns-9159.svc.cluster.local jessie_tcp@dns-test-service.dns-9159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local]

Aug 21 13:01:39.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:39.683: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:39.687: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:39.691: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:39.738: INFO: Unable to read jessie_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:39.742: INFO: Unable to read jessie_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:39.746: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:39.751: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:39.777: INFO: Lookups using dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a failed for: [wheezy_udp@dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_udp@dns-test-service.dns-9159.svc.cluster.local jessie_tcp@dns-test-service.dns-9159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local]

Aug 21 13:01:44.676: INFO: Unable to read wheezy_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:44.681: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:44.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:44.689: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:44.720: INFO: Unable to read jessie_udp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:44.724: INFO: Unable to read jessie_tcp@dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:44.727: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:44.729: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local from pod dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a: the server could not find the requested resource (get pods dns-test-47515710-5df8-495e-a4b7-b258859f262a)
Aug 21 13:01:44.748: INFO: Lookups using dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a failed for: [wheezy_udp@dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@dns-test-service.dns-9159.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_udp@dns-test-service.dns-9159.svc.cluster.local jessie_tcp@dns-test-service.dns-9159.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9159.svc.cluster.local]

Aug 21 13:01:49.760: INFO: DNS probes using dns-9159/dns-test-47515710-5df8-495e-a4b7-b258859f262a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:01:50.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9159" for this suite.

• [SLOW TEST:41.417 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":193,"skipped":3278,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:01:50.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 13:01:50.656: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:01:51.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4887" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":194,"skipped":3302,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:01:51.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 13:01:51.798: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b399bbf3-26d0-4749-add7-2fd8b8b7407a", Controller:(*bool)(0x400372b026), BlockOwnerDeletion:(*bool)(0x400372b027)}}
Aug 21 13:01:51.825: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a6f57a14-e83b-44cf-a87f-8b9c18feb076", Controller:(*bool)(0x40036d6766), BlockOwnerDeletion:(*bool)(0x40036d6767)}}
Aug 21 13:01:51.872: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7c7c69ef-1db1-4667-b123-c6d82c6b4c0c", Controller:(*bool)(0x400372b216), BlockOwnerDeletion:(*bool)(0x400372b217)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:01:57.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-917" for this suite.

• [SLOW TEST:5.851 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":195,"skipped":3330,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:01:57.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0821 13:02:10.225289      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 13:02:10.225: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:02:10.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5502" for this suite.

• [SLOW TEST:13.159 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":196,"skipped":3330,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:02:10.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0821 13:02:51.242605      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 13:02:51.242: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:02:51.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9538" for this suite.

• [SLOW TEST:40.853 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":197,"skipped":3343,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:02:51.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 13:02:57.219: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:02:57.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7264" for this suite.

• [SLOW TEST:6.259 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3359,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:02:57.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Aug 21 13:02:58.633: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:02:59.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3073" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":199,"skipped":3380,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:03:00.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 13:03:02.326: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a0c50f1-fb1d-4a3d-8525-d160287f24ba" in namespace "projected-7533" to be "Succeeded or Failed"
Aug 21 13:03:02.668: INFO: Pod "downwardapi-volume-6a0c50f1-fb1d-4a3d-8525-d160287f24ba": Phase="Pending", Reason="", readiness=false. Elapsed: 342.054639ms
Aug 21 13:03:05.004: INFO: Pod "downwardapi-volume-6a0c50f1-fb1d-4a3d-8525-d160287f24ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.677628738s
Aug 21 13:03:07.119: INFO: Pod "downwardapi-volume-6a0c50f1-fb1d-4a3d-8525-d160287f24ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.792368879s
Aug 21 13:03:09.429: INFO: Pod "downwardapi-volume-6a0c50f1-fb1d-4a3d-8525-d160287f24ba": Phase="Pending", Reason="", readiness=false. Elapsed: 7.102438046s
Aug 21 13:03:11.436: INFO: Pod "downwardapi-volume-6a0c50f1-fb1d-4a3d-8525-d160287f24ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.109628406s
STEP: Saw pod success
Aug 21 13:03:11.436: INFO: Pod "downwardapi-volume-6a0c50f1-fb1d-4a3d-8525-d160287f24ba" satisfied condition "Succeeded or Failed"
Aug 21 13:03:11.442: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-6a0c50f1-fb1d-4a3d-8525-d160287f24ba container client-container: 
STEP: delete the pod
Aug 21 13:03:11.716: INFO: Waiting for pod downwardapi-volume-6a0c50f1-fb1d-4a3d-8525-d160287f24ba to disappear
Aug 21 13:03:12.009: INFO: Pod downwardapi-volume-6a0c50f1-fb1d-4a3d-8525-d160287f24ba no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:03:12.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7533" for this suite.

• [SLOW TEST:11.322 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3391,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:03:12.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 21 13:03:12.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8836'
Aug 21 13:03:16.433: INFO: stderr: ""
Aug 21 13:03:16.433: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 13:03:16.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8836'
Aug 21 13:03:17.711: INFO: stderr: ""
Aug 21 13:03:17.711: INFO: stdout: "update-demo-nautilus-9j52k update-demo-nautilus-jdcnq "
Aug 21 13:03:17.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j52k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:18.961: INFO: stderr: ""
Aug 21 13:03:18.961: INFO: stdout: ""
Aug 21 13:03:18.961: INFO: update-demo-nautilus-9j52k is created but not running
Aug 21 13:03:23.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8836'
Aug 21 13:03:25.261: INFO: stderr: ""
Aug 21 13:03:25.262: INFO: stdout: "update-demo-nautilus-9j52k update-demo-nautilus-jdcnq "
Aug 21 13:03:25.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j52k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:26.560: INFO: stderr: ""
Aug 21 13:03:26.560: INFO: stdout: "true"
Aug 21 13:03:26.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j52k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:27.830: INFO: stderr: ""
Aug 21 13:03:27.830: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 13:03:27.830: INFO: validating pod update-demo-nautilus-9j52k
Aug 21 13:03:27.838: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 13:03:27.838: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 13:03:27.838: INFO: update-demo-nautilus-9j52k is verified up and running
Aug 21 13:03:27.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jdcnq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:29.116: INFO: stderr: ""
Aug 21 13:03:29.117: INFO: stdout: "true"
Aug 21 13:03:29.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jdcnq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:30.409: INFO: stderr: ""
Aug 21 13:03:30.409: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 13:03:30.409: INFO: validating pod update-demo-nautilus-jdcnq
Aug 21 13:03:30.415: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 13:03:30.415: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 13:03:30.415: INFO: update-demo-nautilus-jdcnq is verified up and running
STEP: scaling down the replication controller
Aug 21 13:03:30.426: INFO: scanned /root for discovery docs: 
Aug 21 13:03:30.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8836'
Aug 21 13:03:31.741: INFO: stderr: ""
Aug 21 13:03:31.741: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 13:03:31.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8836'
Aug 21 13:03:33.222: INFO: stderr: ""
Aug 21 13:03:33.222: INFO: stdout: "update-demo-nautilus-9j52k update-demo-nautilus-jdcnq "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 21 13:03:38.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8836'
Aug 21 13:03:39.478: INFO: stderr: ""
Aug 21 13:03:39.478: INFO: stdout: "update-demo-nautilus-9j52k "
Aug 21 13:03:39.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j52k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:40.762: INFO: stderr: ""
Aug 21 13:03:40.762: INFO: stdout: "true"
Aug 21 13:03:40.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j52k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:42.081: INFO: stderr: ""
Aug 21 13:03:42.082: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 13:03:42.082: INFO: validating pod update-demo-nautilus-9j52k
Aug 21 13:03:42.209: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 13:03:42.209: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 13:03:42.209: INFO: update-demo-nautilus-9j52k is verified up and running
STEP: scaling up the replication controller
Aug 21 13:03:42.219: INFO: scanned /root for discovery docs: 
Aug 21 13:03:42.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8836'
Aug 21 13:03:43.493: INFO: stderr: ""
Aug 21 13:03:43.493: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 13:03:43.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8836'
Aug 21 13:03:44.763: INFO: stderr: ""
Aug 21 13:03:44.764: INFO: stdout: "update-demo-nautilus-2vhqw update-demo-nautilus-9j52k "
Aug 21 13:03:44.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vhqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:46.007: INFO: stderr: ""
Aug 21 13:03:46.007: INFO: stdout: ""
Aug 21 13:03:46.007: INFO: update-demo-nautilus-2vhqw is created but not running
Aug 21 13:03:51.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8836'
Aug 21 13:03:52.333: INFO: stderr: ""
Aug 21 13:03:52.333: INFO: stdout: "update-demo-nautilus-2vhqw update-demo-nautilus-9j52k "
Aug 21 13:03:52.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vhqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:53.591: INFO: stderr: ""
Aug 21 13:03:53.592: INFO: stdout: "true"
Aug 21 13:03:53.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vhqw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:54.859: INFO: stderr: ""
Aug 21 13:03:54.859: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 13:03:54.859: INFO: validating pod update-demo-nautilus-2vhqw
Aug 21 13:03:54.865: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 13:03:54.865: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 13:03:54.865: INFO: update-demo-nautilus-2vhqw is verified up and running
Aug 21 13:03:54.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j52k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:56.114: INFO: stderr: ""
Aug 21 13:03:56.114: INFO: stdout: "true"
Aug 21 13:03:56.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9j52k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8836'
Aug 21 13:03:57.410: INFO: stderr: ""
Aug 21 13:03:57.410: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 13:03:57.410: INFO: validating pod update-demo-nautilus-9j52k
Aug 21 13:03:57.416: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 13:03:57.416: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 13:03:57.416: INFO: update-demo-nautilus-9j52k is verified up and running
STEP: using delete to clean up resources
Aug 21 13:03:57.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8836'
Aug 21 13:03:58.596: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 13:03:58.597: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 21 13:03:58.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8836'
Aug 21 13:03:59.877: INFO: stderr: "No resources found in kubectl-8836 namespace.\n"
Aug 21 13:03:59.877: INFO: stdout: ""
Aug 21 13:03:59.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8836 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 13:04:01.139: INFO: stderr: ""
Aug 21 13:04:01.139: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:04:01.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8836" for this suite.

• [SLOW TEST:49.130 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":201,"skipped":3393,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:04:01.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 21 13:04:01.289: INFO: Created pod &Pod{ObjectMeta:{dns-2897  dns-2897 /api/v1/namespaces/dns-2897/pods/dns-2897 f7b3e879-bfa3-47f7-8618-8cd9a6e93217 2128766 0 2020-08-21 13:04:01 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-08-21 13:04:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-92qxh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-92qxh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-92qxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]
LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:04:01.308: INFO: The status of Pod dns-2897 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 13:04:03.316: INFO: The status of Pod dns-2897 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 13:04:05.315: INFO: The status of Pod dns-2897 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 21 13:04:05.316: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2897 PodName:dns-2897 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 13:04:05.316: INFO: >>> kubeConfig: /root/.kube/config
I0821 13:04:05.379546      10 log.go:172] (0x4002170630) (0x400176fa40) Create stream
I0821 13:04:05.379729      10 log.go:172] (0x4002170630) (0x400176fa40) Stream added, broadcasting: 1
I0821 13:04:05.384407      10 log.go:172] (0x4002170630) Reply frame received for 1
I0821 13:04:05.384648      10 log.go:172] (0x4002170630) (0x400176fae0) Create stream
I0821 13:04:05.384883      10 log.go:172] (0x4002170630) (0x400176fae0) Stream added, broadcasting: 3
I0821 13:04:05.386748      10 log.go:172] (0x4002170630) Reply frame received for 3
I0821 13:04:05.386950      10 log.go:172] (0x4002170630) (0x4002f81040) Create stream
I0821 13:04:05.387053      10 log.go:172] (0x4002170630) (0x4002f81040) Stream added, broadcasting: 5
I0821 13:04:05.389094      10 log.go:172] (0x4002170630) Reply frame received for 5
I0821 13:04:05.476243      10 log.go:172] (0x4002170630) Data frame received for 3
I0821 13:04:05.476383      10 log.go:172] (0x400176fae0) (3) Data frame handling
I0821 13:04:05.476501      10 log.go:172] (0x400176fae0) (3) Data frame sent
I0821 13:04:05.478940      10 log.go:172] (0x4002170630) Data frame received for 3
I0821 13:04:05.479103      10 log.go:172] (0x400176fae0) (3) Data frame handling
I0821 13:04:05.479212      10 log.go:172] (0x4002170630) Data frame received for 5
I0821 13:04:05.479325      10 log.go:172] (0x4002f81040) (5) Data frame handling
I0821 13:04:05.481132      10 log.go:172] (0x4002170630) Data frame received for 1
I0821 13:04:05.481228      10 log.go:172] (0x400176fa40) (1) Data frame handling
I0821 13:04:05.481326      10 log.go:172] (0x400176fa40) (1) Data frame sent
I0821 13:04:05.481436      10 log.go:172] (0x4002170630) (0x400176fa40) Stream removed, broadcasting: 1
I0821 13:04:05.481564      10 log.go:172] (0x4002170630) Go away received
I0821 13:04:05.481911      10 log.go:172] (0x4002170630) (0x400176fa40) Stream removed, broadcasting: 1
I0821 13:04:05.482061      10 log.go:172] (0x4002170630) (0x400176fae0) Stream removed, broadcasting: 3
I0821 13:04:05.482206      10 log.go:172] (0x4002170630) (0x4002f81040) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 21 13:04:05.482: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2897 PodName:dns-2897 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 13:04:05.483: INFO: >>> kubeConfig: /root/.kube/config
I0821 13:04:05.545375      10 log.go:172] (0x40017b24d0) (0x4001c9edc0) Create stream
I0821 13:04:05.545525      10 log.go:172] (0x40017b24d0) (0x4001c9edc0) Stream added, broadcasting: 1
I0821 13:04:05.550408      10 log.go:172] (0x40017b24d0) Reply frame received for 1
I0821 13:04:05.550679      10 log.go:172] (0x40017b24d0) (0x400176fd60) Create stream
I0821 13:04:05.550806      10 log.go:172] (0x40017b24d0) (0x400176fd60) Stream added, broadcasting: 3
I0821 13:04:05.553392      10 log.go:172] (0x40017b24d0) Reply frame received for 3
I0821 13:04:05.553613      10 log.go:172] (0x40017b24d0) (0x40028e0000) Create stream
I0821 13:04:05.553732      10 log.go:172] (0x40017b24d0) (0x40028e0000) Stream added, broadcasting: 5
I0821 13:04:05.555420      10 log.go:172] (0x40017b24d0) Reply frame received for 5
I0821 13:04:05.625975      10 log.go:172] (0x40017b24d0) Data frame received for 3
I0821 13:04:05.626274      10 log.go:172] (0x400176fd60) (3) Data frame handling
I0821 13:04:05.626449      10 log.go:172] (0x400176fd60) (3) Data frame sent
I0821 13:04:05.626637      10 log.go:172] (0x40017b24d0) Data frame received for 5
I0821 13:04:05.626735      10 log.go:172] (0x40028e0000) (5) Data frame handling
I0821 13:04:05.627408      10 log.go:172] (0x40017b24d0) Data frame received for 3
I0821 13:04:05.627556      10 log.go:172] (0x400176fd60) (3) Data frame handling
I0821 13:04:05.628691      10 log.go:172] (0x40017b24d0) Data frame received for 1
I0821 13:04:05.628884      10 log.go:172] (0x4001c9edc0) (1) Data frame handling
I0821 13:04:05.628971      10 log.go:172] (0x4001c9edc0) (1) Data frame sent
I0821 13:04:05.629093      10 log.go:172] (0x40017b24d0) (0x4001c9edc0) Stream removed, broadcasting: 1
I0821 13:04:05.629253      10 log.go:172] (0x40017b24d0) Go away received
I0821 13:04:05.629625      10 log.go:172] (0x40017b24d0) (0x4001c9edc0) Stream removed, broadcasting: 1
I0821 13:04:05.629711      10 log.go:172] (0x40017b24d0) (0x400176fd60) Stream removed, broadcasting: 3
I0821 13:04:05.629783      10 log.go:172] (0x40017b24d0) (0x40028e0000) Stream removed, broadcasting: 5
Aug 21 13:04:05.630: INFO: Deleting pod dns-2897...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:04:05.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2897" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":202,"skipped":3422,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:04:05.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 21 13:04:10.608: INFO: Successfully updated pod "labelsupdate1de038a4-b8df-4373-b2b0-1ab79ce34fae"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:04:12.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7475" for this suite.

• [SLOW TEST:6.990 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3450,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:04:12.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 13:04:12.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e91a5d25-8181-4e58-8e0e-6411a620dc42" in namespace "projected-5565" to be "Succeeded or Failed"
Aug 21 13:04:12.840: INFO: Pod "downwardapi-volume-e91a5d25-8181-4e58-8e0e-6411a620dc42": Phase="Pending", Reason="", readiness=false. Elapsed: 70.984625ms
Aug 21 13:04:14.847: INFO: Pod "downwardapi-volume-e91a5d25-8181-4e58-8e0e-6411a620dc42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078734961s
Aug 21 13:04:16.855: INFO: Pod "downwardapi-volume-e91a5d25-8181-4e58-8e0e-6411a620dc42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086586147s
Aug 21 13:04:18.862: INFO: Pod "downwardapi-volume-e91a5d25-8181-4e58-8e0e-6411a620dc42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093597299s
Aug 21 13:04:20.869: INFO: Pod "downwardapi-volume-e91a5d25-8181-4e58-8e0e-6411a620dc42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100419358s
STEP: Saw pod success
Aug 21 13:04:20.869: INFO: Pod "downwardapi-volume-e91a5d25-8181-4e58-8e0e-6411a620dc42" satisfied condition "Succeeded or Failed"
Aug 21 13:04:20.874: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e91a5d25-8181-4e58-8e0e-6411a620dc42 container client-container: 
STEP: delete the pod
Aug 21 13:04:20.946: INFO: Waiting for pod downwardapi-volume-e91a5d25-8181-4e58-8e0e-6411a620dc42 to disappear
Aug 21 13:04:20.951: INFO: Pod downwardapi-volume-e91a5d25-8181-4e58-8e0e-6411a620dc42 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:04:20.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5565" for this suite.

• [SLOW TEST:8.267 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3453,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:04:20.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-e0569278-d16d-4c92-99d8-0aa29283b41f
STEP: Creating a pod to test consume secrets
Aug 21 13:04:21.085: INFO: Waiting up to 5m0s for pod "pod-secrets-238ff725-c8e9-4699-8820-4172b1bc03f7" in namespace "secrets-4697" to be "Succeeded or Failed"
Aug 21 13:04:21.104: INFO: Pod "pod-secrets-238ff725-c8e9-4699-8820-4172b1bc03f7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.426931ms
Aug 21 13:04:23.113: INFO: Pod "pod-secrets-238ff725-c8e9-4699-8820-4172b1bc03f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02706611s
Aug 21 13:04:25.120: INFO: Pod "pod-secrets-238ff725-c8e9-4699-8820-4172b1bc03f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034450611s
STEP: Saw pod success
Aug 21 13:04:25.120: INFO: Pod "pod-secrets-238ff725-c8e9-4699-8820-4172b1bc03f7" satisfied condition "Succeeded or Failed"
Aug 21 13:04:25.126: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-238ff725-c8e9-4699-8820-4172b1bc03f7 container secret-volume-test: 
STEP: delete the pod
Aug 21 13:04:25.152: INFO: Waiting for pod pod-secrets-238ff725-c8e9-4699-8820-4172b1bc03f7 to disappear
Aug 21 13:04:25.190: INFO: Pod pod-secrets-238ff725-c8e9-4699-8820-4172b1bc03f7 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:04:25.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4697" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3455,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:04:25.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:04:29.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1353" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3460,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:04:29.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-40812a60-64a0-4246-a503-3348184cbd54 in namespace container-probe-9366
Aug 21 13:04:33.508: INFO: Started pod busybox-40812a60-64a0-4246-a503-3348184cbd54 in namespace container-probe-9366
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 13:04:33.513: INFO: Initial restart count of pod busybox-40812a60-64a0-4246-a503-3348184cbd54 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:08:34.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9366" for this suite.

• [SLOW TEST:245.822 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3485,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:08:35.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 21 13:08:35.407: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 13:08:35.471: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 13:08:35.475: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 21 13:08:35.496: INFO: kindnet-kkxd5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 13:08:35.496: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 13:08:35.496: INFO: kube-proxy-vn4t5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 13:08:35.496: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 13:08:35.496: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 21 13:08:35.518: INFO: kindnet-qzfqb from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 13:08:35.518: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 13:08:35.518: INFO: kube-proxy-c52ll from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 13:08:35.518: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162d4a767e8ad0cc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162d4a767f7af39e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:08:36.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8779" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":208,"skipped":3531,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:08:36.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Aug 21 13:08:36.819: INFO: Waiting up to 5m0s for pod "client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6" in namespace "containers-8081" to be "Succeeded or Failed"
Aug 21 13:08:36.895: INFO: Pod "client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6": Phase="Pending", Reason="", readiness=false. Elapsed: 75.733659ms
Aug 21 13:08:38.902: INFO: Pod "client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082612521s
Aug 21 13:08:41.350: INFO: Pod "client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.531187464s
Aug 21 13:08:43.453: INFO: Pod "client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.633266175s
Aug 21 13:08:45.501: INFO: Pod "client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.682066714s
Aug 21 13:08:47.508: INFO: Pod "client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.688870282s
STEP: Saw pod success
Aug 21 13:08:47.509: INFO: Pod "client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6" satisfied condition "Succeeded or Failed"
Aug 21 13:08:47.513: INFO: Trying to get logs from node kali-worker pod client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6 container test-container: 
STEP: delete the pod
Aug 21 13:08:47.569: INFO: Waiting for pod client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6 to disappear
Aug 21 13:08:47.652: INFO: Pod client-containers-fa176bae-c7ef-4ecf-ab56-1d9686fcd0f6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:08:47.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8081" for this suite.

• [SLOW TEST:11.092 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3548,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:08:47.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Aug 21 13:08:47.741: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 21 13:08:52.748: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:08:53.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8760" for this suite.

• [SLOW TEST:6.216 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":210,"skipped":3583,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:08:53.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 13:09:00.092: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:09:00.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2566" for this suite.

• [SLOW TEST:7.123 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3586,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:09:01.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-8399b49b-efd9-4ba7-b81b-422655147cb4
STEP: Creating a pod to test consume configMaps
Aug 21 13:09:01.887: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46d65823-9a2c-46b6-b255-6c3c9f2180da" in namespace "projected-3377" to be "Succeeded or Failed"
Aug 21 13:09:02.054: INFO: Pod "pod-projected-configmaps-46d65823-9a2c-46b6-b255-6c3c9f2180da": Phase="Pending", Reason="", readiness=false. Elapsed: 166.243549ms
Aug 21 13:09:04.062: INFO: Pod "pod-projected-configmaps-46d65823-9a2c-46b6-b255-6c3c9f2180da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17464579s
Aug 21 13:09:06.209: INFO: Pod "pod-projected-configmaps-46d65823-9a2c-46b6-b255-6c3c9f2180da": Phase="Running", Reason="", readiness=true. Elapsed: 4.321186557s
Aug 21 13:09:08.432: INFO: Pod "pod-projected-configmaps-46d65823-9a2c-46b6-b255-6c3c9f2180da": Phase="Running", Reason="", readiness=true. Elapsed: 6.5445053s
Aug 21 13:09:10.440: INFO: Pod "pod-projected-configmaps-46d65823-9a2c-46b6-b255-6c3c9f2180da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.552782135s
STEP: Saw pod success
Aug 21 13:09:10.440: INFO: Pod "pod-projected-configmaps-46d65823-9a2c-46b6-b255-6c3c9f2180da" satisfied condition "Succeeded or Failed"
Aug 21 13:09:10.445: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-46d65823-9a2c-46b6-b255-6c3c9f2180da container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 13:09:10.495: INFO: Waiting for pod pod-projected-configmaps-46d65823-9a2c-46b6-b255-6c3c9f2180da to disappear
Aug 21 13:09:10.607: INFO: Pod pod-projected-configmaps-46d65823-9a2c-46b6-b255-6c3c9f2180da no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:09:10.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3377" for this suite.

• [SLOW TEST:9.613 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3593,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:09:10.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7422
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug 21 13:09:11.057: INFO: Found 0 stateful pods, waiting for 3
Aug 21 13:09:21.101: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:09:21.101: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:09:21.101: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 21 13:09:31.067: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:09:31.067: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:09:31.067: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:09:31.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7422 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 13:09:32.636: INFO: stderr: "I0821 13:09:32.483763    3750 log.go:172] (0x400003a2c0) (0x400083a140) Create stream\nI0821 13:09:32.486484    3750 log.go:172] (0x400003a2c0) (0x400083a140) Stream added, broadcasting: 1\nI0821 13:09:32.499572    3750 log.go:172] (0x400003a2c0) Reply frame received for 1\nI0821 13:09:32.500883    3750 log.go:172] (0x400003a2c0) (0x400083a320) Create stream\nI0821 13:09:32.500984    3750 log.go:172] (0x400003a2c0) (0x400083a320) Stream added, broadcasting: 3\nI0821 13:09:32.502669    3750 log.go:172] (0x400003a2c0) Reply frame received for 3\nI0821 13:09:32.503062    3750 log.go:172] (0x400003a2c0) (0x4000827220) Create stream\nI0821 13:09:32.503172    3750 log.go:172] (0x400003a2c0) (0x4000827220) Stream added, broadcasting: 5\nI0821 13:09:32.504586    3750 log.go:172] (0x400003a2c0) Reply frame received for 5\nI0821 13:09:32.572605    3750 log.go:172] (0x400003a2c0) Data frame received for 5\nI0821 13:09:32.572912    3750 log.go:172] (0x4000827220) (5) Data frame handling\nI0821 13:09:32.573316    3750 log.go:172] (0x4000827220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 13:09:32.614083    3750 log.go:172] (0x400003a2c0) Data frame received for 5\nI0821 13:09:32.614224    3750 log.go:172] (0x4000827220) (5) Data frame handling\nI0821 13:09:32.614392    3750 log.go:172] (0x400003a2c0) Data frame received for 3\nI0821 13:09:32.614513    3750 log.go:172] (0x400083a320) (3) Data frame handling\nI0821 13:09:32.614688    3750 log.go:172] (0x400083a320) (3) Data frame sent\nI0821 13:09:32.614832    3750 log.go:172] (0x400003a2c0) Data frame received for 3\nI0821 13:09:32.614944    3750 log.go:172] (0x400083a320) (3) Data frame handling\nI0821 13:09:32.616122    3750 log.go:172] (0x400003a2c0) Data frame received for 1\nI0821 13:09:32.616250    3750 log.go:172] (0x400083a140) (1) Data frame handling\nI0821 13:09:32.616374    3750 log.go:172] (0x400083a140) (1) Data frame sent\nI0821 13:09:32.617549    3750 log.go:172] (0x400003a2c0) (0x400083a140) Stream removed, broadcasting: 1\nI0821 13:09:32.621621    3750 log.go:172] (0x400003a2c0) Go away received\nI0821 13:09:32.623959    3750 log.go:172] (0x400003a2c0) (0x400083a140) Stream removed, broadcasting: 1\nI0821 13:09:32.624847    3750 log.go:172] (0x400003a2c0) (0x400083a320) Stream removed, broadcasting: 3\nI0821 13:09:32.625370    3750 log.go:172] (0x400003a2c0) (0x4000827220) Stream removed, broadcasting: 5\n"
Aug 21 13:09:32.637: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 13:09:32.637: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 21 13:09:42.754: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 21 13:09:52.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7422 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 13:09:54.346: INFO: stderr: "I0821 13:09:54.252174    3773 log.go:172] (0x40009ca2c0) (0x40007e9360) Create stream\nI0821 13:09:54.254728    3773 log.go:172] (0x40009ca2c0) (0x40007e9360) Stream added, broadcasting: 1\nI0821 13:09:54.264853    3773 log.go:172] (0x40009ca2c0) Reply frame received for 1\nI0821 13:09:54.265701    3773 log.go:172] (0x40009ca2c0) (0x4000940000) Create stream\nI0821 13:09:54.265780    3773 log.go:172] (0x40009ca2c0) (0x4000940000) Stream added, broadcasting: 3\nI0821 13:09:54.267338    3773 log.go:172] (0x40009ca2c0) Reply frame received for 3\nI0821 13:09:54.267578    3773 log.go:172] (0x40009ca2c0) (0x40009400a0) Create stream\nI0821 13:09:54.267632    3773 log.go:172] (0x40009ca2c0) (0x40009400a0) Stream added, broadcasting: 5\nI0821 13:09:54.268795    3773 log.go:172] (0x40009ca2c0) Reply frame received for 5\nI0821 13:09:54.327902    3773 log.go:172] (0x40009ca2c0) Data frame received for 5\nI0821 13:09:54.328370    3773 log.go:172] (0x40009ca2c0) Data frame received for 1\nI0821 13:09:54.329165    3773 log.go:172] (0x40009ca2c0) Data frame received for 3\nI0821 13:09:54.329356    3773 log.go:172] (0x40007e9360) (1) Data frame handling\nI0821 13:09:54.329537    3773 log.go:172] (0x4000940000) (3) Data frame handling\nI0821 13:09:54.329762    3773 log.go:172] (0x40009400a0) (5) Data frame handling\nI0821 13:09:54.330391    3773 log.go:172] (0x4000940000) (3) Data frame sent\nI0821 13:09:54.330575    3773 log.go:172] (0x40009400a0) (5) Data frame sent\nI0821 13:09:54.330657    3773 log.go:172] (0x40009ca2c0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 13:09:54.331391    3773 log.go:172] (0x4000940000) (3) Data frame handling\nI0821 13:09:54.331617    3773 log.go:172] (0x40007e9360) (1) Data frame sent\nI0821 13:09:54.331799    3773 log.go:172] (0x40009ca2c0) Data frame received for 5\nI0821 13:09:54.331884    3773 log.go:172] (0x40009400a0) (5) Data frame handling\nI0821 13:09:54.333874    3773 log.go:172] (0x40009ca2c0) (0x40007e9360) Stream removed, broadcasting: 1\nI0821 13:09:54.335813    3773 log.go:172] (0x40009ca2c0) Go away received\nI0821 13:09:54.338253    3773 log.go:172] (0x40009ca2c0) (0x40007e9360) Stream removed, broadcasting: 1\nI0821 13:09:54.338620    3773 log.go:172] (0x40009ca2c0) (0x4000940000) Stream removed, broadcasting: 3\nI0821 13:09:54.338877    3773 log.go:172] (0x40009ca2c0) (0x40009400a0) Stream removed, broadcasting: 5\n"
Aug 21 13:09:54.347: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 13:09:54.348: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 13:10:24.915: INFO: Waiting for StatefulSet statefulset-7422/ss2 to complete update
STEP: Rolling back to a previous revision
Aug 21 13:10:34.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7422 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 21 13:10:37.060: INFO: stderr: "I0821 13:10:36.760224    3795 log.go:172] (0x4000bfa0b0) (0x40007e7400) Create stream\nI0821 13:10:36.763816    3795 log.go:172] (0x4000bfa0b0) (0x40007e7400) Stream added, broadcasting: 1\nI0821 13:10:36.774499    3795 log.go:172] (0x4000bfa0b0) Reply frame received for 1\nI0821 13:10:36.775080    3795 log.go:172] (0x4000bfa0b0) (0x40007e74a0) Create stream\nI0821 13:10:36.775142    3795 log.go:172] (0x4000bfa0b0) (0x40007e74a0) Stream added, broadcasting: 3\nI0821 13:10:36.776643    3795 log.go:172] (0x4000bfa0b0) Reply frame received for 3\nI0821 13:10:36.776958    3795 log.go:172] (0x4000bfa0b0) (0x4000a42000) Create stream\nI0821 13:10:36.777023    3795 log.go:172] (0x4000bfa0b0) (0x4000a42000) Stream added, broadcasting: 5\nI0821 13:10:36.778032    3795 log.go:172] (0x4000bfa0b0) Reply frame received for 5\nI0821 13:10:36.837697    3795 log.go:172] (0x4000bfa0b0) Data frame received for 5\nI0821 13:10:36.838054    3795 log.go:172] (0x4000a42000) (5) Data frame handling\nI0821 13:10:36.838869    3795 log.go:172] (0x4000a42000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0821 13:10:37.044719    3795 log.go:172] (0x4000bfa0b0) Data frame received for 5\nI0821 13:10:37.045024    3795 log.go:172] (0x4000a42000) (5) Data frame handling\nI0821 13:10:37.045291    3795 log.go:172] (0x4000bfa0b0) Data frame received for 3\nI0821 13:10:37.045462    3795 log.go:172] (0x4000bfa0b0) Data frame received for 1\nI0821 13:10:37.045621    3795 log.go:172] (0x40007e7400) (1) Data frame handling\nI0821 13:10:37.045753    3795 log.go:172] (0x40007e7400) (1) Data frame sent\nI0821 13:10:37.045849    3795 log.go:172] (0x40007e74a0) (3) Data frame handling\nI0821 13:10:37.045996    3795 log.go:172] (0x40007e74a0) (3) Data frame sent\nI0821 13:10:37.046127    3795 log.go:172] (0x4000bfa0b0) Data frame received for 3\nI0821 13:10:37.046251    3795 log.go:172] (0x40007e74a0) (3) Data frame handling\nI0821 13:10:37.047191    3795 log.go:172] (0x4000bfa0b0) (0x40007e7400) Stream removed, broadcasting: 1\nI0821 13:10:37.049651    3795 log.go:172] (0x4000bfa0b0) Go away received\nI0821 13:10:37.052674    3795 log.go:172] (0x4000bfa0b0) (0x40007e7400) Stream removed, broadcasting: 1\nI0821 13:10:37.053050    3795 log.go:172] (0x4000bfa0b0) (0x40007e74a0) Stream removed, broadcasting: 3\nI0821 13:10:37.053233    3795 log.go:172] (0x4000bfa0b0) (0x4000a42000) Stream removed, broadcasting: 5\n"
Aug 21 13:10:37.061: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 21 13:10:37.061: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 21 13:10:47.112: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 21 13:10:57.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7422 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 21 13:10:58.701: INFO: stderr: "I0821 13:10:58.563886    3817 log.go:172] (0x4000a86000) (0x400098e000) Create stream\nI0821 13:10:58.567339    3817 log.go:172] (0x4000a86000) (0x400098e000) Stream added, broadcasting: 1\nI0821 13:10:58.580712    3817 log.go:172] (0x4000a86000) Reply frame received for 1\nI0821 13:10:58.581426    3817 log.go:172] (0x4000a86000) (0x400081d180) Create stream\nI0821 13:10:58.581491    3817 log.go:172] (0x4000a86000) (0x400081d180) Stream added, broadcasting: 3\nI0821 13:10:58.582830    3817 log.go:172] (0x4000a86000) Reply frame received for 3\nI0821 13:10:58.583116    3817 log.go:172] (0x4000a86000) (0x400098e0a0) Create stream\nI0821 13:10:58.583180    3817 log.go:172] (0x4000a86000) (0x400098e0a0) Stream added, broadcasting: 5\nI0821 13:10:58.584481    3817 log.go:172] (0x4000a86000) Reply frame received for 5\nI0821 13:10:58.678415    3817 log.go:172] (0x4000a86000) Data frame received for 5\nI0821 13:10:58.679006    3817 log.go:172] (0x4000a86000) Data frame received for 1\nI0821 13:10:58.679184    3817 log.go:172] (0x400098e000) (1) Data frame handling\nI0821 13:10:58.679448    3817 log.go:172] (0x400098e0a0) (5) Data frame handling\nI0821 13:10:58.679763    3817 log.go:172] (0x4000a86000) Data frame received for 3\nI0821 13:10:58.679954    3817 log.go:172] (0x400081d180) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0821 13:10:58.682386    3817 log.go:172] (0x400098e0a0) (5) Data frame sent\nI0821 13:10:58.682556    3817 log.go:172] (0x400098e000) (1) Data frame sent\nI0821 13:10:58.682743    3817 log.go:172] (0x400081d180) (3) Data frame sent\nI0821 13:10:58.682955    3817 log.go:172] (0x4000a86000) Data frame received for 5\nI0821 13:10:58.683086    3817 log.go:172] (0x400098e0a0) (5) Data frame handling\nI0821 13:10:58.683324    3817 log.go:172] (0x4000a86000) Data frame received for 3\nI0821 13:10:58.683581    3817 log.go:172] (0x4000a86000) (0x400098e000) Stream removed, broadcasting: 1\nI0821 13:10:58.684526    3817 log.go:172] (0x400081d180) (3) Data frame handling\nI0821 13:10:58.686099    3817 log.go:172] (0x4000a86000) Go away received\nI0821 13:10:58.689897    3817 log.go:172] (0x4000a86000) (0x400098e000) Stream removed, broadcasting: 1\nI0821 13:10:58.690108    3817 log.go:172] (0x4000a86000) (0x400081d180) Stream removed, broadcasting: 3\nI0821 13:10:58.690275    3817 log.go:172] (0x4000a86000) (0x400098e0a0) Stream removed, broadcasting: 5\n"
Aug 21 13:10:58.701: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 21 13:10:58.702: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 21 13:11:08.946: INFO: Waiting for StatefulSet statefulset-7422/ss2 to complete update
Aug 21 13:11:08.946: INFO: Waiting for Pod statefulset-7422/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 21 13:11:08.946: INFO: Waiting for Pod statefulset-7422/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 21 13:11:08.946: INFO: Waiting for Pod statefulset-7422/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 21 13:11:18.962: INFO: Waiting for StatefulSet statefulset-7422/ss2 to complete update
Aug 21 13:11:18.962: INFO: Waiting for Pod statefulset-7422/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 21 13:11:18.962: INFO: Waiting for Pod statefulset-7422/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 21 13:11:28.954: INFO: Waiting for StatefulSet statefulset-7422/ss2 to complete update
Aug 21 13:11:28.954: INFO: Waiting for Pod statefulset-7422/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 21 13:11:28.955: INFO: Waiting for Pod statefulset-7422/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 21 13:11:38.963: INFO: Waiting for StatefulSet statefulset-7422/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 21 13:11:48.960: INFO: Deleting all statefulset in ns statefulset-7422
Aug 21 13:11:48.965: INFO: Scaling statefulset ss2 to 0
Aug 21 13:12:29.060: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 13:12:29.065: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:12:29.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7422" for this suite.

• [SLOW TEST:198.717 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":213,"skipped":3656,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:12:29.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 13:12:33.978: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 13:12:36.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612353, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612353, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612354, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612353, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 13:12:38.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612353, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612353, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612354, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612353, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 13:12:40.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612353, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612353, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612354, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612353, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 13:12:44.359: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:12:54.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4891" for this suite.
STEP: Destroying namespace "webhook-4891-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:25.411 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":214,"skipped":3692,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:12:54.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 21 13:12:54.844: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Aug 21 13:12:58.631: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 21 13:13:01.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 13:13:03.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 13:13:05.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612378, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 13:13:08.335: INFO: Waited 628.955294ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:13:10.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3181" for this suite.

• [SLOW TEST:15.257 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":215,"skipped":3700,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:13:10.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-5818a02f-7375-4f2e-a145-cacfcd623de2
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:13:16.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9971" for this suite.

• [SLOW TEST:6.760 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3716,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:13:16.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:13:17.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1348" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":217,"skipped":3733,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:13:17.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 21 13:13:17.584: INFO: Waiting up to 5m0s for pod "pod-20530f72-a520-4eb1-a2b5-aff62fc9a23b" in namespace "emptydir-7582" to be "Succeeded or Failed"
Aug 21 13:13:17.593: INFO: Pod "pod-20530f72-a520-4eb1-a2b5-aff62fc9a23b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.780411ms
Aug 21 13:13:19.671: INFO: Pod "pod-20530f72-a520-4eb1-a2b5-aff62fc9a23b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087430265s
Aug 21 13:13:21.677: INFO: Pod "pod-20530f72-a520-4eb1-a2b5-aff62fc9a23b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092601121s
Aug 21 13:13:23.709: INFO: Pod "pod-20530f72-a520-4eb1-a2b5-aff62fc9a23b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125001748s
STEP: Saw pod success
Aug 21 13:13:23.709: INFO: Pod "pod-20530f72-a520-4eb1-a2b5-aff62fc9a23b" satisfied condition "Succeeded or Failed"
Aug 21 13:13:23.714: INFO: Trying to get logs from node kali-worker pod pod-20530f72-a520-4eb1-a2b5-aff62fc9a23b container test-container: 
STEP: delete the pod
Aug 21 13:13:23.901: INFO: Waiting for pod pod-20530f72-a520-4eb1-a2b5-aff62fc9a23b to disappear
Aug 21 13:13:23.911: INFO: Pod pod-20530f72-a520-4eb1-a2b5-aff62fc9a23b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:13:23.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7582" for this suite.

• [SLOW TEST:6.490 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3740,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:13:23.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 21 13:13:36.199: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 21 13:13:36.242: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 21 13:13:38.243: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 21 13:13:38.250: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 21 13:13:40.243: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 21 13:13:40.271: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:13:40.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6037" for this suite.

• [SLOW TEST:16.369 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3748,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:13:40.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 13:13:40.440: INFO: Creating deployment "webserver-deployment"
Aug 21 13:13:40.462: INFO: Waiting for observed generation 1
Aug 21 13:13:42.488: INFO: Waiting for all required pods to come up
Aug 21 13:13:42.499: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 21 13:13:58.516: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 21 13:13:58.527: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 21 13:13:58.540: INFO: Updating deployment webserver-deployment
Aug 21 13:13:58.541: INFO: Waiting for observed generation 2
Aug 21 13:14:00.834: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 21 13:14:00.842: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 21 13:14:00.847: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 21 13:14:00.862: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 21 13:14:00.862: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 21 13:14:00.866: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 21 13:14:00.874: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 21 13:14:00.874: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 21 13:14:00.885: INFO: Updating deployment webserver-deployment
Aug 21 13:14:00.885: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 21 13:14:01.166: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 21 13:14:01.833: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
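The two figures just verified can be checked by hand. Before the scale-up, the stalled rollout holds 8 replicas in the first (old) ReplicaSet and 5 in the second (new) one; scaling the deployment from 10 to 30 with maxSurge=3 allows 33 replicas in total, and the 20 replicas still to be added are split roughly in that 8:5 ratio across the two ReplicaSets. The small sketch below reproduces the 20 and 13 values; it is back-of-the-envelope arithmetic, not the deployment controller's exact algorithm.

```go
// Back-of-the-envelope check of the proportional-scaling figures verified in
// the log above (20 and 13); not the controller's exact algorithm.
package main

import "fmt"

func main() {
	oldRS, newRS := 8, 5          // .spec.replicas of each ReplicaSet before the scale-up
	desired, maxSurge := 30, 3    // new deployment replica count and rolling-update surge
	allowed := desired + maxSurge // 33 replicas may exist while the rollout is in flight

	toAdd := allowed - (oldRS + newRS)        // 20 replicas still to distribute
	addOld := toAdd * oldRS / (oldRS + newRS) // 12: proportional share for the old ReplicaSet
	addNew := toAdd - addOld                  // 8: remainder goes to the new ReplicaSet

	fmt.Println(oldRS+addOld, newRS+addNew) // prints "20 13", matching the verified .spec.replicas
}
```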
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 21 13:14:05.644: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-1091 /apis/apps/v1/namespaces/deployment-1091/deployments/webserver-deployment 87704e95-5c14-42e5-b683-9421002136e0 2131663 3 2020-08-21 13:13:40 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-21 13:14:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 
105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40036a5ca8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-21 13:14:01 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-08-21 13:14:02 +0000 UTC,LastTransitionTime:2020-08-21 13:13:40 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 21 13:14:05.728: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-1091 /apis/apps/v1/namespaces/deployment-1091/replicasets/webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 2131657 3 2020-08-21 13:13:58 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 87704e95-5c14-42e5-b683-9421002136e0 0x4003810137 0x4003810138}] []  [{kube-controller-manager Update apps/v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 55 55 48 52 101 57 53 45 53 99 49 52 45 52 50 101 53 45 98 54 56 51 45 57 52 50 49 48 48 50 49 51 54 101 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 
125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40038101b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 13:14:05.728: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 21 13:14:05.730: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-1091 /apis/apps/v1/namespaces/deployment-1091/replicasets/webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 2131645 3 2020-08-21 13:13:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 87704e95-5c14-42e5-b683-9421002136e0 0x4003810217 0x4003810218}] []  [{kube-controller-manager Update apps/v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 55 55 48 52 101 57 53 45 53 99 49 52 45 52 50 101 53 45 98 54 56 51 45 57 52 50 49 48 48 50 49 51 54 101 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 
105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003810288  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 21 13:14:06.494: INFO: Pod "webserver-deployment-6676bcd6d4-4rcbv" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4rcbv webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-4rcbv 90d81f97-5e8e-40f3-98da-5a8f3e36e22d 2131673 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40036c1f37 0x40036c1f38}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.496: INFO: Pod "webserver-deployment-6676bcd6d4-624jw" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-624jw webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-624jw 5c5bd90d-cee3-4b28-84ce-49b59ada9655 2131690 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f20e7 0x40062f20e8}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.497: INFO: Pod "webserver-deployment-6676bcd6d4-67b66" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-67b66 webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-67b66 59c9fa94-c172-494c-8e3c-58895a7b2b21 2131685 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f2297 0x40062f2298}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
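The FieldsV1{Raw:*[...]} values in these pod dumps are the managedFields JSON printed as decimal byte values, which is hard to read directly. As a side note (not part of the test run), a minimal Go sketch that decodes such a space-separated byte listing back into the JSON text it encodes could look like the following; the decodeRaw helper name and the sample prefix are illustrative only.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeRaw turns a space-separated list of decimal byte values, as shown in
// the FieldsV1{Raw:*[...]} dumps above, back into the JSON text it encodes.
func decodeRaw(s string) (string, error) {
	var out []byte
	for _, tok := range strings.Fields(s) {
		n, err := strconv.Atoi(tok)
		if err != nil {
			return "", err
		}
		out = append(out, byte(n))
	}
	return string(out), nil
}

func main() {
	// Prefix of a kube-controller-manager managedFields entry from the dump above.
	prefix := "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123"
	decoded, err := decodeRaw(prefix)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded) // prints the truncated JSON prefix: {"f:metadata":{
}
```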
Aug 21 13:14:06.499: INFO: Pod "webserver-deployment-6676bcd6d4-6gjbl" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6gjbl webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-6gjbl 2cbfc8e4-74cd-4737-8e1d-a8c98f6f785d 2131575 0 2020-08-21 13:13:58 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f2447 0x40062f2448}] []  [{kube-controller-manager Update v1 2020-08-21 13:13:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:13:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:13:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
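Each "is not available" entry above corresponds to a pod whose Ready condition is reported as False while its httpd container is still waiting (Reason: ContainerCreating). As an illustration only, here is a hedged client-go sketch of how one might list the pods in this deployment's namespace and report which are not yet Ready; the kubeconfig path, namespace, and label selector are taken from the dumps above, but the helper itself is not part of the conformance test.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True; this is the
// condition shown with Status:False in the dumps above.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path, matching the one used elsewhere in this run.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Namespace and label selector come from the pod dumps above.
	pods, err := client.CoreV1().Pods("deployment-1091").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if !isReady(p) {
			fmt.Printf("pod %s is not ready\n", p.Name)
		}
	}
}
```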
Aug 21 13:14:06.501: INFO: Pod "webserver-deployment-6676bcd6d4-7j2f5" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7j2f5 webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-7j2f5 c0ab4d26-0961-4431-808e-a69b53b3f15b 2131705 0 2020-08-21 13:14:02 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f25f7 0x40062f25f8}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.502: INFO: Pod "webserver-deployment-6676bcd6d4-9565v" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9565v webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-9565v 46359746-291a-4e2f-ade3-62a4d920a12c 2131576 0 2020-08-21 13:13:58 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f27a7 0x40062f27a8}] []  [{kube-controller-manager Update v1 2020-08-21 13:13:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:13:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:13:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.503: INFO: Pod "webserver-deployment-6676bcd6d4-bbbzb" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bbbzb webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-bbbzb aedd685b-95cc-403d-93a9-b8a4beb051f1 2131550 0 2020-08-21 13:13:58 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f2957 0x40062f2958}] []  [{kube-controller-manager Update v1 2020-08-21 13:13:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:13:58 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:13:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.505: INFO: Pod "webserver-deployment-6676bcd6d4-btz6c" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-btz6c webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-btz6c a805ede3-2c50-4671-815e-d3a3ef386a15 2131699 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f2b07 0x40062f2b08}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.506: INFO: Pod "webserver-deployment-6676bcd6d4-cpxzv" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cpxzv webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-cpxzv c54f4e63-f81f-4217-a31f-87832deac955 2131689 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f2cb7 0x40062f2cb8}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.508: INFO: Pod "webserver-deployment-6676bcd6d4-fcf9l" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fcf9l webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-fcf9l ff0de96f-6820-40f5-8899-bf5c0a45e4ee 2131660 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f2e77 0x40062f2e78}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:14:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
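Note: the FieldsV1{Raw:*[123 34 102 ...]} blocks inside these pod dumps are the managedFields field sets; the e2e logger prints their JSON bytes as decimal values. A minimal Go sketch (not part of the test framework, shown only to make the dumps readable) converts such a byte slice back into its JSON form; the slice below is just the first few values copied from the dump above:

package main

import "fmt"

func main() {
	// First bytes of the Raw field from the pod dump above; each decimal
	// value is the ASCII code of one character of the underlying JSON.
	raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123}
	fmt.Println(string(raw)) // prints: {"f:metadata":{
}
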
Aug 21 13:14:06.509: INFO: Pod "webserver-deployment-6676bcd6d4-n5zd9" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-n5zd9 webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-n5zd9 714aa6ba-8559-4b2b-a886-f0312feb31e5 2131694 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f3057 0x40062f3058}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.510: INFO: Pod "webserver-deployment-6676bcd6d4-rcfg7" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rcfg7 webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-rcfg7 a582b334-0a87-4e3d-9b3b-552a084d3879 2131715 0 2020-08-21 13:13:58 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f3207 0x40062f3208}] []  [{kube-controller-manager Update v1 2020-08-21 13:13:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 54 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,
TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.169,StartTime:2020-08-21 13:13:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
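The dump for webserver-deployment-6676bcd6d4-rcfg7 is the first one that shows why these pods stay unavailable: the httpd container sits in Waiting with Reason ErrImagePull because the image webserver:404 cannot be resolved on docker.io. A hedged client-go sketch (an illustration, not the e2e framework's own code; the namespace and pod name are simply taken from the log line above) that surfaces the same waiting reason:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig at this path, as in the rest of the run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("deployment-1091").Get(
		context.TODO(), "webserver-deployment-6676bcd6d4-rcfg7", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Report why each container is not ready, e.g. ErrImagePull for webserver:404.
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			fmt.Printf("%s: %s: %s\n", cs.Name, cs.State.Waiting.Reason, cs.State.Waiting.Message)
		}
	}
}
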
Aug 21 13:14:06.512: INFO: Pod "webserver-deployment-6676bcd6d4-z6m9k" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-z6m9k webserver-deployment-6676bcd6d4- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-6676bcd6d4-z6m9k 5b262b96-972d-46c2-aedf-4e02f080657e 2131571 0 2020-08-21 13:13:58 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 8bb021cc-aa61-4512-a070-b022d2eddd3e 0x40062f33f7 0x40062f33f8}] []  [{kube-controller-manager Update v1 2020-08-21 13:13:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 98 98 48 50 49 99 99 45 97 97 54 49 45 52 53 49 50 45 97 48 55 48 45 98 48 50 50 100 50 101 100 100 100 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:13:58 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:13:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.513: INFO: Pod "webserver-deployment-84855cf797-2kmxm" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-2kmxm webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-2kmxm 8a5a07e4-99ab-4246-8d01-b9c211e6dcbe 2131677 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x40062f35a7 0x40062f35a8}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 
101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readines
sGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.515: INFO: Pod "webserver-deployment-84855cf797-7g2nb" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-7g2nb webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-7g2nb f47f2b50-a716-483a-bb9a-97eb9b60c524 2131661 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x40062f3737 0x40062f3738}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 
101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readines
sGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:14:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.516: INFO: Pod "webserver-deployment-84855cf797-8d6ng" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-8d6ng webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-8d6ng 3582cb25-4fda-4e4c-9bcd-016c6fdeffb5 2131708 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x40062f38d7 0x40062f38d8}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 
101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readines
sGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
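At this point the dumps alternate between two ReplicaSets of the same Deployment: pods with pod-template-hash 6676bcd6d4 run the unpullable image webserver:404, while pods with pod-template-hash 84855cf797 run docker.io/library/httpd:2.4.38-alpine. A short Go sketch (an illustration under the same assumptions as the previous one, reusing the hypothetical clientset setup) that lists the Deployment's pods grouped by template hash so the two sets are easy to tell apart:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List the Deployment's pods by their shared label and print the hash,
	// image, and phase, distinguishing old and new ReplicaSet pods.
	pods, err := clientset.CoreV1().Pods("deployment-1091").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=httpd",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("hash=%s pod=%s image=%s phase=%s\n",
			p.Labels["pod-template-hash"], p.Name, p.Spec.Containers[0].Image, p.Status.Phase)
	}
}
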
Aug 21 13:14:06.518: INFO: Pod "webserver-deployment-84855cf797-9brsf" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-9brsf webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-9brsf 9136177e-e173-4d1a-a7ad-8ddeb033a902 2131502 0 2020-08-21 13:13:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x40062f3a97 0x40062f3a98}] []  [{kube-controller-manager Update v1 2020-08-21 13:13:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:13:57 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 
34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 54 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*3
00,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.168,StartTime:2020-08-21 13:13:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 13:13:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a392fffa1cbb43238a801866e59dd1cd81cadc8e73f4fd93940e525606640368,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.168,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
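Editor's note: in these dumps the "is available" / "is not available" classification tracks the pod's Ready condition: the available pods are Running with Ready=True, while the unavailable ones are Pending with Reason=ContainersNotReady. A minimal sketch of that check using the corev1 types (the deployment controller's real availability logic also honors minReadySeconds, which this sketch ignores):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isReady reports whether the pod's Ready condition is True, the
// distinction the "is available" log lines above reflect.
func isReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println(isReady(pod)) // true
}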
Aug 21 13:14:06.519: INFO: Pod "webserver-deployment-84855cf797-bwcjl" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-bwcjl webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-bwcjl a738d602-2581-449f-a135-967f88e3d43b 2131698 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x40062f3c67 0x40062f3c68}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 
101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readines
sGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.521: INFO: Pod "webserver-deployment-84855cf797-c7pnb" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-c7pnb webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-c7pnb 8accc4b9-5657-42b8-ad3b-dc87605e9686 2131681 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x40062f3e07 0x40062f3e08}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 
101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readines
sGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.522: INFO: Pod "webserver-deployment-84855cf797-cfglf" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-cfglf webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-cfglf 7a7e3a3d-939c-435c-a344-4ac0db7201b6 2131678 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x40062f3fc7 0x40062f3fc8}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 
101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readiness
Gates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
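Editor's note: every pod in these dumps reports QOSClass:BestEffort because the httpd container sets no resource requests or limits (Limits and Requests are both empty). A minimal sketch of that particular rule; the full kubelet QoS classification also distinguishes Guaranteed and Burstable and considers init containers, which this sketch does not cover:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isBestEffort reports whether no container in the pod sets any resource
// requests or limits, which is why the pods above are BestEffort.
func isBestEffort(pod *corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			return false
		}
	}
	return true
}

func main() {
	pod := &corev1.Pod{
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "httpd", Image: "docker.io/library/httpd:2.4.38-alpine"},
			},
		},
	}
	fmt.Println(isBestEffort(pod)) // true
}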
Aug 21 13:14:06.523: INFO: Pod "webserver-deployment-84855cf797-flfrz" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-flfrz webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-flfrz 2b3ad223-137d-4a8a-bc30-ca7b1f3e681c 2131674 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x4003406227 0x4003406228}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 
101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readines
sGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.524: INFO: Pod "webserver-deployment-84855cf797-gp4td" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-gp4td webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-gp4td 2d75e5fc-5f7e-426d-8685-778611165c97 2131444 0 2020-08-21 13:13:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x4003406c47 0x4003406c48}] []  [{kube-controller-manager Update v1 2020-08-21 13:13:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:13:49 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 
34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 52 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*
300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.242,StartTime:2020-08-21 13:13:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 13:13:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e7b0d5b73011c8c5d7ad1bc80a0704120eeeac7afaf81a1dfc42e328f6e5bc1f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
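Editor's note: the pods being dumped here all belong to the ReplicaSet webserver-deployment-84855cf797 and carry the labels name=httpd and pod-template-hash=84855cf797. A minimal client-go sketch for listing them yourself and reproducing the per-pod phase/IP information the test is logging (the kubeconfig path and namespace are taken from this log; this is an illustrative sketch, not the framework's own code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Config path and namespace as seen in the log above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("deployment-1091").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=httpd,pod-template-hash=84855cf797",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase=%s node=%s podIP=%s\n", p.Name, p.Status.Phase, p.Spec.NodeName, p.Status.PodIP)
	}
}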
Aug 21 13:14:06.526: INFO: Pod "webserver-deployment-84855cf797-hr9pw" is available:
&Pod{ObjectMeta:{Name:webserver-deployment-84855cf797-hr9pw, Namespace:deployment-1091, UID:e1ae45e5-ae88-4184-80b6-21a11e96e039, ResourceVersion:2131519, CreationTimestamp:2020-08-21 13:13:40 +0000 UTC, Labels:map[name:httpd pod-template-hash:84855cf797], OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7], ManagedFields:[kube-controller-manager (spec, 13:13:40), kubelet (status, 13:13:57)]}, Spec:{Containers:[httpd image=docker.io/library/httpd:2.4.38-alpine], Volumes:[default-token-582kr (Secret)], NodeName:kali-worker, RestartPolicy:Always, ServiceAccountName:default, DNSPolicy:ClusterFirst}, Status:{Phase:Running, Conditions:[Initialized=True, Ready=True (since 13:13:57), ContainersReady=True, PodScheduled=True], HostIP:172.18.0.16, PodIP:10.244.2.164, StartTime:2020-08-21 13:13:40 +0000 UTC, ContainerStatuses:[httpd Running (started 2020-08-21 13:13:56 +0000 UTC), Ready=true, RestartCount=0, ContainerID:containerd://3cf65825c9b172bb1e58b8dc95cf2ab83df4580c23cf0d1dd161d398b510a62a], QOSClass:BestEffort}}
Aug 21 13:14:06.527: INFO: Pod "webserver-deployment-84855cf797-k9wlz" is available:
&Pod{ObjectMeta:{Name:webserver-deployment-84855cf797-k9wlz, Namespace:deployment-1091, UID:e16431e2-aa0a-4036-96e0-682b75402862, ResourceVersion:2131505, CreationTimestamp:2020-08-21 13:13:40 +0000 UTC, Labels:map[name:httpd pod-template-hash:84855cf797], OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7], ManagedFields:[kube-controller-manager (spec, 13:13:40), kubelet (status, 13:13:57)]}, Spec:{Containers:[httpd image=docker.io/library/httpd:2.4.38-alpine], Volumes:[default-token-582kr (Secret)], NodeName:kali-worker, RestartPolicy:Always, ServiceAccountName:default, DNSPolicy:ClusterFirst}, Status:{Phase:Running, Conditions:[Initialized=True, Ready=True (since 13:13:57), ContainersReady=True, PodScheduled=True], HostIP:172.18.0.16, PodIP:10.244.2.167, StartTime:2020-08-21 13:13:40 +0000 UTC, ContainerStatuses:[httpd Running (started 2020-08-21 13:13:57 +0000 UTC), Ready=true, RestartCount=0, ContainerID:containerd://9a6f9ae190bb185fd968f7feb539b03d2f7b74335f39ceb019057dcdb1126666], QOSClass:BestEffort}}
Aug 21 13:14:06.528: INFO: Pod "webserver-deployment-84855cf797-kc99j" is available:
&Pod{ObjectMeta:{Name:webserver-deployment-84855cf797-kc99j, Namespace:deployment-1091, UID:b54bc86d-59c9-42e3-81b8-1bdb444f5ab1, ResourceVersion:2131509, CreationTimestamp:2020-08-21 13:13:40 +0000 UTC, Labels:map[name:httpd pod-template-hash:84855cf797], OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7], ManagedFields:[kube-controller-manager (spec, 13:13:40), kubelet (status, 13:13:57)]}, Spec:{Containers:[httpd image=docker.io/library/httpd:2.4.38-alpine], Volumes:[default-token-582kr (Secret)], NodeName:kali-worker, RestartPolicy:Always, ServiceAccountName:default, DNSPolicy:ClusterFirst}, Status:{Phase:Running, Conditions:[Initialized=True, Ready=True (since 13:13:57), ContainersReady=True, PodScheduled=True], HostIP:172.18.0.16, PodIP:10.244.2.165, StartTime:2020-08-21 13:13:40 +0000 UTC, ContainerStatuses:[httpd Running (started 2020-08-21 13:13:56 +0000 UTC), Ready=true, RestartCount=0, ContainerID:containerd://8761f8e009aaa5878f7822d5288fed334b506cbd2f7d3c94db1591f5bcfe4e68], QOSClass:BestEffort}}
Aug 21 13:14:06.530: INFO: Pod "webserver-deployment-84855cf797-n95t8" is not available:
&Pod{ObjectMeta:{Name:webserver-deployment-84855cf797-n95t8, Namespace:deployment-1091, UID:17960b8b-ce0c-483a-9b7f-79c8ce94e50c, ResourceVersion:2131693, CreationTimestamp:2020-08-21 13:14:01 +0000 UTC, Labels:map[name:httpd pod-template-hash:84855cf797], OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7], ManagedFields:[kube-controller-manager (spec, 13:14:01), kubelet (status, 13:14:03)]}, Spec:{Containers:[httpd image=docker.io/library/httpd:2.4.38-alpine], Volumes:[default-token-582kr (Secret)], NodeName:kali-worker, RestartPolicy:Always, ServiceAccountName:default, DNSPolicy:ClusterFirst}, Status:{Phase:Pending, Conditions:[Initialized=True (13:14:02), Ready=False (ContainersNotReady: containers with unready status: [httpd]), ContainersReady=False (ContainersNotReady), PodScheduled=True (13:14:02)], HostIP:172.18.0.16, PodIP:, StartTime:2020-08-21 13:14:02 +0000 UTC, ContainerStatuses:[httpd Waiting (ContainerCreating), Ready=false, RestartCount=0, Started=false], QOSClass:BestEffort}}
Aug 21 13:14:06.531: INFO: Pod "webserver-deployment-84855cf797-ng75h" is available:
&Pod{ObjectMeta:{Name:webserver-deployment-84855cf797-ng75h, Namespace:deployment-1091, UID:541b883a-e2a0-4450-882a-cd70320653d9, ResourceVersion:2131472, CreationTimestamp:2020-08-21 13:13:40 +0000 UTC, Labels:map[name:httpd pod-template-hash:84855cf797], OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7], ManagedFields:[kube-controller-manager (spec, 13:13:40), kubelet (status, 13:13:53)]}, Spec:{Containers:[httpd image=docker.io/library/httpd:2.4.38-alpine], Volumes:[default-token-582kr (Secret)], NodeName:kali-worker2, RestartPolicy:Always, ServiceAccountName:default, DNSPolicy:ClusterFirst}, Status:{Phase:Running, Conditions:[Initialized=True, Ready=True (since 13:13:52), ContainersReady=True, PodScheduled=True], HostIP:172.18.0.13, PodIP:10.244.1.244, StartTime:2020-08-21 13:13:40 +0000 UTC, ContainerStatuses:[httpd Running (started 2020-08-21 13:13:51 +0000 UTC), Ready=true, RestartCount=0, ContainerID:containerd://b73c14bb5a30b229f46bf3948e02add2cc913d38fa57edca7ccc16216742af78], QOSClass:BestEffort}}
Aug 21 13:14:06.532: INFO: Pod "webserver-deployment-84855cf797-t266f" is not available:
&Pod{ObjectMeta:{Name:webserver-deployment-84855cf797-t266f, Namespace:deployment-1091, UID:552b5ad9-a05f-4898-9d7b-cc97807bd1af, ResourceVersion:2131668, CreationTimestamp:2020-08-21 13:14:01 +0000 UTC, Labels:map[name:httpd pod-template-hash:84855cf797], OwnerReferences:[apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7], ManagedFields:[kube-controller-manager (spec, 13:14:01), kubelet (status, 13:14:02)]}, Spec:{Containers:[httpd image=docker.io/library/httpd:2.4.38-alpine], Volumes:[default-token-582kr (Secret)], NodeName:kali-worker2, RestartPolicy:Always, ServiceAccountName:default, DNSPolicy:ClusterFirst}, Status:{Phase:Pending, Conditions:[Initialized=True (13:14:02), Ready=False (ContainersNotReady: containers with unready status: [httpd]), ContainersReady=False (ContainersNotReady), PodScheduled=True (13:14:01)], HostIP:172.18.0.13, PodIP:, StartTime:2020-08-21 13:14:02 +0000 UTC, ContainerStatuses:[httpd Waiting (ContainerCreating), Ready=false, RestartCount=0, Started=false], QOSClass:BestEffort}}
Aug 21 13:14:06.534: INFO: Pod "webserver-deployment-84855cf797-v6pmv" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-v6pmv webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-v6pmv a43ff0bd-2970-4081-90b6-4308efc927ee 2131512 0 2020-08-21 13:13:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x4003407a47 0x4003407a48}] []  [{kube-controller-manager Update v1 2020-08-21 13:13:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:13:57 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 
34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 54 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*3
00,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.166,StartTime:2020-08-21 13:13:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 13:13:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://03242e3d9c7a20548e0b207e6f38087cfc10cf650350329c0545b064825dee80,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.166,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
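The Raw:*[123 34 102 58 ...] runs throughout these pod dumps are the managed-fields payload (a []byte) printed by Go's default %v formatter as decimal byte values; decoded, they are ordinary server-side-apply JSON such as {"f:metadata":{...}}. A minimal sketch of the decoding, with the sample bytes copied from the start of one of the dumps:

package main

import "fmt"

func main() {
	// Leading bytes of a FieldsV1 dump from this log; string() turns them back into JSON text.
	raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123}
	fmt.Println(string(raw)) // prints: {"f:metadata":{
}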
Aug 21 13:14:06.535: INFO: Pod "webserver-deployment-84855cf797-x6ll6" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-x6ll6 webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-x6ll6 8ba6a91a-fbfb-46cd-a8ab-ec6cc6c24e71 2131667 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x4003407bf7 0x4003407bf8}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 
101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readiness
Gates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.536: INFO: Pod "webserver-deployment-84855cf797-xt8zd" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xt8zd webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-xt8zd eae0f00f-282b-4771-9138-5b5cff5f32ed 2131680 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x4003407d87 0x4003407d88}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 
101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readiness
Gates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.538: INFO: Pod "webserver-deployment-84855cf797-z8gpb" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-z8gpb webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-z8gpb 42348cae-3c6c-4c42-b9ef-53dda345489f 2131684 0 2020-08-21 13:14:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x400323e077 0x400323e078}] []  [{kube-controller-manager Update v1 2020-08-21 13:14:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:14:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 
101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readiness
Gates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:14:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:14:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 21 13:14:06.539: INFO: Pod "webserver-deployment-84855cf797-zjh8v" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-zjh8v webserver-deployment-84855cf797- deployment-1091 /api/v1/namespaces/deployment-1091/pods/webserver-deployment-84855cf797-zjh8v b6fefbae-66aa-4b71-beaa-21a95d9e531b 2131468 0 2020-08-21 13:13:40 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ecfca6ad-2fee-450d-9eb2-e00c8ec9f2c7 0x400323e207 0x400323e208}] []  [{kube-controller-manager Update v1 2020-08-21 13:13:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 99 102 99 97 54 97 100 45 50 102 101 101 45 52 53 48 100 45 57 101 98 50 45 101 48 48 99 56 101 99 57 102 50 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:13:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 
34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 52 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-582kr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-582kr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-582kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*
300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:13:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.245,StartTime:2020-08-21 13:13:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-21 13:13:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c4f12aab8ecc0d33f5cb6e108ae9b22d21a31b9ef15cbd4683edf0352a439c31,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:14:06.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1091" for this suite.

• [SLOW TEST:28.034 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":220,"skipped":3764,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:14:08.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 21 13:14:10.277: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-60 /api/v1/namespaces/watch-60/configmaps/e2e-watch-test-label-changed c2b3811c-dce7-4ec2-9530-f25111e8e643 2131739 0 2020-08-21 13:14:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 13:14:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:14:10.278: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-60 /api/v1/namespaces/watch-60/configmaps/e2e-watch-test-label-changed c2b3811c-dce7-4ec2-9530-f25111e8e643 2131740 0 2020-08-21 13:14:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 13:14:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:14:10.279: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-60 /api/v1/namespaces/watch-60/configmaps/e2e-watch-test-label-changed c2b3811c-dce7-4ec2-9530-f25111e8e643 2131743 0 2020-08-21 13:14:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 13:14:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 21 13:14:23.163: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-60 /api/v1/namespaces/watch-60/configmaps/e2e-watch-test-label-changed c2b3811c-dce7-4ec2-9530-f25111e8e643 2131856 0 2020-08-21 13:14:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 13:14:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:14:23.164: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-60 /api/v1/namespaces/watch-60/configmaps/e2e-watch-test-label-changed c2b3811c-dce7-4ec2-9530-f25111e8e643 2131858 0 2020-08-21 13:14:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 13:14:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:14:23.165: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-60 /api/v1/namespaces/watch-60/configmaps/e2e-watch-test-label-changed c2b3811c-dce7-4ec2-9530-f25111e8e643 2131862 0 2020-08-21 13:14:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-21 13:14:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:14:23.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-60" for this suite.

• [SLOW TEST:15.500 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":221,"skipped":3781,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:14:23.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-fpkb
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 13:14:26.077: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fpkb" in namespace "subpath-6220" to be "Succeeded or Failed"
Aug 21 13:14:26.149: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 71.481121ms
Aug 21 13:14:28.739: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.66119721s
Aug 21 13:14:30.978: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.900674326s
Aug 21 13:14:33.224: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.146907092s
Aug 21 13:14:35.442: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.3648587s
Aug 21 13:14:37.513: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.435197744s
Aug 21 13:14:39.576: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Running", Reason="", readiness=true. Elapsed: 13.498087777s
Aug 21 13:14:41.702: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Running", Reason="", readiness=true. Elapsed: 15.624336732s
Aug 21 13:14:43.842: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Running", Reason="", readiness=true. Elapsed: 17.764219802s
Aug 21 13:14:45.868: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Running", Reason="", readiness=true. Elapsed: 19.790907036s
Aug 21 13:14:48.171: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Running", Reason="", readiness=true. Elapsed: 22.093267944s
Aug 21 13:14:50.179: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Running", Reason="", readiness=true. Elapsed: 24.101359736s
Aug 21 13:14:52.187: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Running", Reason="", readiness=true. Elapsed: 26.109178976s
Aug 21 13:14:54.344: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Running", Reason="", readiness=true. Elapsed: 28.26649508s
Aug 21 13:14:56.707: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Running", Reason="", readiness=true. Elapsed: 30.629356075s
Aug 21 13:14:59.128: INFO: Pod "pod-subpath-test-downwardapi-fpkb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.050164251s
STEP: Saw pod success
Aug 21 13:14:59.128: INFO: Pod "pod-subpath-test-downwardapi-fpkb" satisfied condition "Succeeded or Failed"
Aug 21 13:14:59.133: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-fpkb container test-container-subpath-downwardapi-fpkb: 
STEP: delete the pod
Aug 21 13:15:00.184: INFO: Waiting for pod pod-subpath-test-downwardapi-fpkb to disappear
Aug 21 13:15:00.596: INFO: Pod pod-subpath-test-downwardapi-fpkb no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-fpkb
Aug 21 13:15:00.596: INFO: Deleting pod "pod-subpath-test-downwardapi-fpkb" in namespace "subpath-6220"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:15:00.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6220" for this suite.

• [SLOW TEST:38.457 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":222,"skipped":3801,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:15:02.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Aug 21 13:15:04.310: INFO: Waiting up to 5m0s for pod "client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669" in namespace "containers-2825" to be "Succeeded or Failed"
Aug 21 13:15:04.752: INFO: Pod "client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669": Phase="Pending", Reason="", readiness=false. Elapsed: 441.977465ms
Aug 21 13:15:06.760: INFO: Pod "client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.449791301s
Aug 21 13:15:09.099: INFO: Pod "client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669": Phase="Pending", Reason="", readiness=false. Elapsed: 4.789001242s
Aug 21 13:15:11.343: INFO: Pod "client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669": Phase="Pending", Reason="", readiness=false. Elapsed: 7.033524669s
Aug 21 13:15:13.565: INFO: Pod "client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669": Phase="Pending", Reason="", readiness=false. Elapsed: 9.255033039s
Aug 21 13:15:15.636: INFO: Pod "client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669": Phase="Pending", Reason="", readiness=false. Elapsed: 11.326523573s
Aug 21 13:15:17.923: INFO: Pod "client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.613414447s
STEP: Saw pod success
Aug 21 13:15:17.924: INFO: Pod "client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669" satisfied condition "Succeeded or Failed"
Aug 21 13:15:18.133: INFO: Trying to get logs from node kali-worker2 pod client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669 container test-container: 
STEP: delete the pod
Aug 21 13:15:18.658: INFO: Waiting for pod client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669 to disappear
Aug 21 13:15:18.911: INFO: Pod client-containers-e3ebce3f-a967-4bc4-90a9-92e85b8a0669 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:15:18.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2825" for this suite.

• [SLOW TEST:17.305 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3811,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:15:19.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 21 13:15:20.488: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 13:15:20.879: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 13:15:20.884: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 21 13:15:20.894: INFO: kindnet-kkxd5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container status recorded)
Aug 21 13:15:20.895: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 13:15:20.895: INFO: kube-proxy-vn4t5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container status recorded)
Aug 21 13:15:20.895: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 13:15:20.895: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
Aug 21 13:15:20.905: INFO: kindnet-qzfqb from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 13:15:20.905: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 13:15:20.905: INFO: kube-proxy-c52ll from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 13:15:20.905: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c969358a-2daa-41d3-9d78-f597bd529c38 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-c969358a-2daa-41d3-9d78-f597bd529c38 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c969358a-2daa-41d3-9d78-f597bd529c38
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:20:35.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2320" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:316.150 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":224,"skipped":3825,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:20:35.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-cd66622b-7862-4a3e-a1a2-e3d5bff7c06e
STEP: Creating a pod to test consume secrets
Aug 21 13:20:35.885: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7d5c6b69-25be-4d66-bdbb-572ee7b8d9d4" in namespace "projected-2760" to be "Succeeded or Failed"
Aug 21 13:20:35.908: INFO: Pod "pod-projected-secrets-7d5c6b69-25be-4d66-bdbb-572ee7b8d9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.96536ms
Aug 21 13:20:38.060: INFO: Pod "pod-projected-secrets-7d5c6b69-25be-4d66-bdbb-572ee7b8d9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175174319s
Aug 21 13:20:40.068: INFO: Pod "pod-projected-secrets-7d5c6b69-25be-4d66-bdbb-572ee7b8d9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182842711s
Aug 21 13:20:42.074: INFO: Pod "pod-projected-secrets-7d5c6b69-25be-4d66-bdbb-572ee7b8d9d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.188944503s
STEP: Saw pod success
Aug 21 13:20:42.074: INFO: Pod "pod-projected-secrets-7d5c6b69-25be-4d66-bdbb-572ee7b8d9d4" satisfied condition "Succeeded or Failed"
Aug 21 13:20:42.079: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-7d5c6b69-25be-4d66-bdbb-572ee7b8d9d4 container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 13:20:42.133: INFO: Waiting for pod pod-projected-secrets-7d5c6b69-25be-4d66-bdbb-572ee7b8d9d4 to disappear
Aug 21 13:20:42.147: INFO: Pod pod-projected-secrets-7d5c6b69-25be-4d66-bdbb-572ee7b8d9d4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:20:42.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2760" for this suite.

• [SLOW TEST:6.420 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3830,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:20:42.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 13:20:42.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-204a95df-8020-449e-8dd4-f9bd47051132" in namespace "projected-4512" to be "Succeeded or Failed"
Aug 21 13:20:42.371: INFO: Pod "downwardapi-volume-204a95df-8020-449e-8dd4-f9bd47051132": Phase="Pending", Reason="", readiness=false. Elapsed: 32.592139ms
Aug 21 13:20:44.381: INFO: Pod "downwardapi-volume-204a95df-8020-449e-8dd4-f9bd47051132": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042791317s
Aug 21 13:20:46.394: INFO: Pod "downwardapi-volume-204a95df-8020-449e-8dd4-f9bd47051132": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055234616s
STEP: Saw pod success
Aug 21 13:20:46.394: INFO: Pod "downwardapi-volume-204a95df-8020-449e-8dd4-f9bd47051132" satisfied condition "Succeeded or Failed"
Aug 21 13:20:46.400: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-204a95df-8020-449e-8dd4-f9bd47051132 container client-container: 
STEP: delete the pod
Aug 21 13:20:46.442: INFO: Waiting for pod downwardapi-volume-204a95df-8020-449e-8dd4-f9bd47051132 to disappear
Aug 21 13:20:46.452: INFO: Pod downwardapi-volume-204a95df-8020-449e-8dd4-f9bd47051132 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:20:46.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4512" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3830,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:20:46.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 21 13:20:51.170: INFO: Successfully updated pod "labelsupdate32afa3e9-3283-4ec3-8eb2-43cc4361a6e5"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:20:53.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3071" for this suite.

• [SLOW TEST:6.728 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3834,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:20:53.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 21 13:20:53.290: INFO: Waiting up to 5m0s for pod "downward-api-e1995c11-797c-46d6-a046-18450a84b2cc" in namespace "downward-api-7161" to be "Succeeded or Failed"
Aug 21 13:20:53.337: INFO: Pod "downward-api-e1995c11-797c-46d6-a046-18450a84b2cc": Phase="Pending", Reason="", readiness=false. Elapsed: 47.204737ms
Aug 21 13:20:55.343: INFO: Pod "downward-api-e1995c11-797c-46d6-a046-18450a84b2cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053081227s
Aug 21 13:20:57.349: INFO: Pod "downward-api-e1995c11-797c-46d6-a046-18450a84b2cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05922474s
STEP: Saw pod success
Aug 21 13:20:57.349: INFO: Pod "downward-api-e1995c11-797c-46d6-a046-18450a84b2cc" satisfied condition "Succeeded or Failed"
Aug 21 13:20:57.354: INFO: Trying to get logs from node kali-worker2 pod downward-api-e1995c11-797c-46d6-a046-18450a84b2cc container dapi-container: 
STEP: delete the pod
Aug 21 13:20:57.387: INFO: Waiting for pod downward-api-e1995c11-797c-46d6-a046-18450a84b2cc to disappear
Aug 21 13:20:57.392: INFO: Pod downward-api-e1995c11-797c-46d6-a046-18450a84b2cc no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:20:57.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7161" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3843,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:20:57.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-9254
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 13:20:57.476: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 21 13:20:57.561: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 13:20:59.678: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 21 13:21:01.588: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 13:21:03.574: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 13:21:05.570: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 13:21:07.568: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 13:21:09.569: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 21 13:21:11.568: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 21 13:21:11.576: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 21 13:21:13.583: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 21 13:21:15.584: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 21 13:21:17.583: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 21 13:21:21.647: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.14:8080/dial?request=hostname&protocol=udp&host=10.244.2.183&port=8081&tries=1'] Namespace:pod-network-test-9254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 13:21:21.647: INFO: >>> kubeConfig: /root/.kube/config
I0821 13:21:21.707397      10 log.go:172] (0x4002caa4d0) (0x4000ddf180) Create stream
I0821 13:21:21.707703      10 log.go:172] (0x4002caa4d0) (0x4000ddf180) Stream added, broadcasting: 1
I0821 13:21:21.712898      10 log.go:172] (0x4002caa4d0) Reply frame received for 1
I0821 13:21:21.713097      10 log.go:172] (0x4002caa4d0) (0x400164a140) Create stream
I0821 13:21:21.713181      10 log.go:172] (0x4002caa4d0) (0x400164a140) Stream added, broadcasting: 3
I0821 13:21:21.714707      10 log.go:172] (0x4002caa4d0) Reply frame received for 3
I0821 13:21:21.714857      10 log.go:172] (0x4002caa4d0) (0x400149b540) Create stream
I0821 13:21:21.714938      10 log.go:172] (0x4002caa4d0) (0x400149b540) Stream added, broadcasting: 5
I0821 13:21:21.716460      10 log.go:172] (0x4002caa4d0) Reply frame received for 5
I0821 13:21:21.795753      10 log.go:172] (0x4002caa4d0) Data frame received for 3
I0821 13:21:21.795926      10 log.go:172] (0x400164a140) (3) Data frame handling
I0821 13:21:21.796036      10 log.go:172] (0x400164a140) (3) Data frame sent
I0821 13:21:21.796123      10 log.go:172] (0x4002caa4d0) Data frame received for 3
I0821 13:21:21.796262      10 log.go:172] (0x4002caa4d0) Data frame received for 5
I0821 13:21:21.796411      10 log.go:172] (0x400149b540) (5) Data frame handling
I0821 13:21:21.796522      10 log.go:172] (0x400164a140) (3) Data frame handling
I0821 13:21:21.798644      10 log.go:172] (0x4002caa4d0) Data frame received for 1
I0821 13:21:21.798814      10 log.go:172] (0x4000ddf180) (1) Data frame handling
I0821 13:21:21.798945      10 log.go:172] (0x4000ddf180) (1) Data frame sent
I0821 13:21:21.799067      10 log.go:172] (0x4002caa4d0) (0x4000ddf180) Stream removed, broadcasting: 1
I0821 13:21:21.799213      10 log.go:172] (0x4002caa4d0) Go away received
I0821 13:21:21.799569      10 log.go:172] (0x4002caa4d0) (0x4000ddf180) Stream removed, broadcasting: 1
I0821 13:21:21.799687      10 log.go:172] (0x4002caa4d0) (0x400164a140) Stream removed, broadcasting: 3
I0821 13:21:21.799818      10 log.go:172] (0x4002caa4d0) (0x400149b540) Stream removed, broadcasting: 5
Aug 21 13:21:21.800: INFO: Waiting for responses: map[]
Aug 21 13:21:21.806: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.14:8080/dial?request=hostname&protocol=udp&host=10.244.1.13&port=8081&tries=1'] Namespace:pod-network-test-9254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 13:21:21.806: INFO: >>> kubeConfig: /root/.kube/config
I0821 13:21:21.865767      10 log.go:172] (0x4002caa790) (0x4000ddf5e0) Create stream
I0821 13:21:21.865905      10 log.go:172] (0x4002caa790) (0x4000ddf5e0) Stream added, broadcasting: 1
I0821 13:21:21.869687      10 log.go:172] (0x4002caa790) Reply frame received for 1
I0821 13:21:21.869939      10 log.go:172] (0x4002caa790) (0x40010f40a0) Create stream
I0821 13:21:21.870081      10 log.go:172] (0x4002caa790) (0x40010f40a0) Stream added, broadcasting: 3
I0821 13:21:21.871918      10 log.go:172] (0x4002caa790) Reply frame received for 3
I0821 13:21:21.872046      10 log.go:172] (0x4002caa790) (0x4002a3a0a0) Create stream
I0821 13:21:21.872121      10 log.go:172] (0x4002caa790) (0x4002a3a0a0) Stream added, broadcasting: 5
I0821 13:21:21.873906      10 log.go:172] (0x4002caa790) Reply frame received for 5
I0821 13:21:21.949351      10 log.go:172] (0x4002caa790) Data frame received for 3
I0821 13:21:21.949473      10 log.go:172] (0x40010f40a0) (3) Data frame handling
I0821 13:21:21.949582      10 log.go:172] (0x40010f40a0) (3) Data frame sent
I0821 13:21:21.949693      10 log.go:172] (0x4002caa790) Data frame received for 3
I0821 13:21:21.949760      10 log.go:172] (0x40010f40a0) (3) Data frame handling
I0821 13:21:21.949839      10 log.go:172] (0x4002caa790) Data frame received for 5
I0821 13:21:21.949911      10 log.go:172] (0x4002a3a0a0) (5) Data frame handling
I0821 13:21:21.951356      10 log.go:172] (0x4002caa790) Data frame received for 1
I0821 13:21:21.951498      10 log.go:172] (0x4000ddf5e0) (1) Data frame handling
I0821 13:21:21.951596      10 log.go:172] (0x4000ddf5e0) (1) Data frame sent
I0821 13:21:21.951692      10 log.go:172] (0x4002caa790) (0x4000ddf5e0) Stream removed, broadcasting: 1
I0821 13:21:21.951818      10 log.go:172] (0x4002caa790) Go away received
I0821 13:21:21.952111      10 log.go:172] (0x4002caa790) (0x4000ddf5e0) Stream removed, broadcasting: 1
I0821 13:21:21.952244      10 log.go:172] (0x4002caa790) (0x40010f40a0) Stream removed, broadcasting: 3
I0821 13:21:21.952388      10 log.go:172] (0x4002caa790) (0x4002a3a0a0) Stream removed, broadcasting: 5
Aug 21 13:21:21.952: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:21:21.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9254" for this suite.

• [SLOW TEST:24.560 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3851,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:21:21.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-7db26311-0ebd-4b6b-8939-dc8fb2ef6c57
STEP: Creating a pod to test consume secrets
Aug 21 13:21:22.083: INFO: Waiting up to 5m0s for pod "pod-secrets-d3a8aa0f-9339-44a7-9a3e-0dc55054361f" in namespace "secrets-4519" to be "Succeeded or Failed"
Aug 21 13:21:22.115: INFO: Pod "pod-secrets-d3a8aa0f-9339-44a7-9a3e-0dc55054361f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.890378ms
Aug 21 13:21:24.123: INFO: Pod "pod-secrets-d3a8aa0f-9339-44a7-9a3e-0dc55054361f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039221093s
Aug 21 13:21:26.130: INFO: Pod "pod-secrets-d3a8aa0f-9339-44a7-9a3e-0dc55054361f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04645482s
STEP: Saw pod success
Aug 21 13:21:26.130: INFO: Pod "pod-secrets-d3a8aa0f-9339-44a7-9a3e-0dc55054361f" satisfied condition "Succeeded or Failed"
Aug 21 13:21:26.136: INFO: Trying to get logs from node kali-worker pod pod-secrets-d3a8aa0f-9339-44a7-9a3e-0dc55054361f container secret-volume-test: 
STEP: delete the pod
Aug 21 13:21:26.194: INFO: Waiting for pod pod-secrets-d3a8aa0f-9339-44a7-9a3e-0dc55054361f to disappear
Aug 21 13:21:26.208: INFO: Pod pod-secrets-d3a8aa0f-9339-44a7-9a3e-0dc55054361f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:21:26.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4519" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3858,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:21:26.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:21:26.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6876" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3883,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:21:26.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:21:37.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6657" for this suite.

• [SLOW TEST:11.262 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":232,"skipped":3902,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:21:37.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 13:21:40.940: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 13:21:43.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612900, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612900, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612901, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733612900, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 13:21:46.214: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:21:46.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-979" for this suite.
STEP: Destroying namespace "webhook-979-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.659 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":233,"skipped":3911,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:21:46.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-3ac9aab5-20d8-4f66-ade8-02367ab0b1e2
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:21:46.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3311" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":234,"skipped":3929,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:21:46.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-77d4c40b-6581-4e14-ac18-7374a9d8487d
STEP: Creating secret with name s-test-opt-upd-d55f96a0-933e-4664-895b-c608d1a4ea22
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-77d4c40b-6581-4e14-ac18-7374a9d8487d
STEP: Updating secret s-test-opt-upd-d55f96a0-933e-4664-895b-c608d1a4ea22
STEP: Creating secret with name s-test-opt-create-1795fe1b-6c98-429e-99e1-bc143a4a18c8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:21:54.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6636" for this suite.

• [SLOW TEST:8.278 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":3947,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:21:54.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 13:21:54.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:21:59.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7279" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":3977,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:21:59.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-7658
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7658
STEP: creating replication controller externalsvc in namespace services-7658
I0821 13:21:59.412398      10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7658, replica count: 2
I0821 13:22:02.463931      10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 13:22:05.464950      10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 21 13:22:05.604: INFO: Creating new exec pod
Aug 21 13:22:09.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-7658 execpodkg8cc -- /bin/sh -x -c nslookup nodeport-service'
Aug 21 13:22:13.837: INFO: stderr: "I0821 13:22:13.696717    3845 log.go:172] (0x40009ceb00) (0x400081b680) Create stream\nI0821 13:22:13.699572    3845 log.go:172] (0x40009ceb00) (0x400081b680) Stream added, broadcasting: 1\nI0821 13:22:13.709033    3845 log.go:172] (0x40009ceb00) Reply frame received for 1\nI0821 13:22:13.709690    3845 log.go:172] (0x40009ceb00) (0x40007e3540) Create stream\nI0821 13:22:13.709780    3845 log.go:172] (0x40009ceb00) (0x40007e3540) Stream added, broadcasting: 3\nI0821 13:22:13.711206    3845 log.go:172] (0x40009ceb00) Reply frame received for 3\nI0821 13:22:13.711458    3845 log.go:172] (0x40009ceb00) (0x400081b720) Create stream\nI0821 13:22:13.711513    3845 log.go:172] (0x40009ceb00) (0x400081b720) Stream added, broadcasting: 5\nI0821 13:22:13.712676    3845 log.go:172] (0x40009ceb00) Reply frame received for 5\nI0821 13:22:13.808313    3845 log.go:172] (0x40009ceb00) Data frame received for 5\nI0821 13:22:13.808573    3845 log.go:172] (0x400081b720) (5) Data frame handling\nI0821 13:22:13.809201    3845 log.go:172] (0x400081b720) (5) Data frame sent\n+ nslookup nodeport-service\nI0821 13:22:13.813576    3845 log.go:172] (0x40009ceb00) Data frame received for 3\nI0821 13:22:13.813682    3845 log.go:172] (0x40007e3540) (3) Data frame handling\nI0821 13:22:13.813778    3845 log.go:172] (0x40007e3540) (3) Data frame sent\nI0821 13:22:13.814651    3845 log.go:172] (0x40009ceb00) Data frame received for 3\nI0821 13:22:13.814798    3845 log.go:172] (0x40007e3540) (3) Data frame handling\nI0821 13:22:13.814905    3845 log.go:172] (0x40007e3540) (3) Data frame sent\nI0821 13:22:13.815071    3845 log.go:172] (0x40009ceb00) Data frame received for 5\nI0821 13:22:13.815203    3845 log.go:172] (0x400081b720) (5) Data frame handling\nI0821 13:22:13.815306    3845 log.go:172] (0x40009ceb00) Data frame received for 3\nI0821 13:22:13.815443    3845 log.go:172] (0x40007e3540) (3) Data frame handling\nI0821 13:22:13.817487    3845 log.go:172] (0x40009ceb00) Data frame received for 1\nI0821 13:22:13.817575    3845 log.go:172] (0x400081b680) (1) Data frame handling\nI0821 13:22:13.817691    3845 log.go:172] (0x400081b680) (1) Data frame sent\nI0821 13:22:13.818889    3845 log.go:172] (0x40009ceb00) (0x400081b680) Stream removed, broadcasting: 1\nI0821 13:22:13.821576    3845 log.go:172] (0x40009ceb00) Go away received\nI0821 13:22:13.826186    3845 log.go:172] (0x40009ceb00) (0x400081b680) Stream removed, broadcasting: 1\nI0821 13:22:13.826550    3845 log.go:172] (0x40009ceb00) (0x40007e3540) Stream removed, broadcasting: 3\nI0821 13:22:13.826797    3845 log.go:172] (0x40009ceb00) (0x400081b720) Stream removed, broadcasting: 5\n"
Aug 21 13:22:13.838: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7658.svc.cluster.local\tcanonical name = externalsvc.services-7658.svc.cluster.local.\nName:\texternalsvc.services-7658.svc.cluster.local\nAddress: 10.99.95.49\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7658, will wait for the garbage collector to delete the pods
Aug 21 13:22:13.904: INFO: Deleting ReplicationController externalsvc took: 8.785797ms
Aug 21 13:22:14.205: INFO: Terminating ReplicationController externalsvc pods took: 300.867861ms
Aug 21 13:22:29.286: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:22:29.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7658" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:30.315 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":237,"skipped":3986,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:22:29.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4766
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug 21 13:22:29.496: INFO: Found 0 stateful pods, waiting for 3
Aug 21 13:22:39.519: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:22:39.520: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:22:39.520: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 21 13:22:49.507: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:22:49.507: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:22:49.507: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 21 13:22:49.555: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 21 13:22:59.691: INFO: Updating stateful set ss2
Aug 21 13:22:59.741: INFO: Waiting for Pod statefulset-4766/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 21 13:23:10.470: INFO: Found 2 stateful pods, waiting for 3
Aug 21 13:23:20.480: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:23:20.480: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 13:23:20.480: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 21 13:23:20.518: INFO: Updating stateful set ss2
Aug 21 13:23:20.553: INFO: Waiting for Pod statefulset-4766/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 21 13:23:30.592: INFO: Updating stateful set ss2
Aug 21 13:23:30.638: INFO: Waiting for StatefulSet statefulset-4766/ss2 to complete update
Aug 21 13:23:30.639: INFO: Waiting for Pod statefulset-4766/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 21 13:23:40.655: INFO: Waiting for StatefulSet statefulset-4766/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 21 13:23:50.655: INFO: Deleting all statefulset in ns statefulset-4766
Aug 21 13:23:50.661: INFO: Scaling statefulset ss2 to 0
Aug 21 13:24:10.700: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 13:24:10.704: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:24:10.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4766" for this suite.

• [SLOW TEST:101.409 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":238,"skipped":4000,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:24:10.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 21 13:24:10.871: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:24:20.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4094" for this suite.

• [SLOW TEST:9.569 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":239,"skipped":4003,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:24:20.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-d35988fc-6bfc-4787-8636-7c33eaf3ee59
STEP: Creating a pod to test consume secrets
Aug 21 13:24:20.437: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c31206e3-d4d2-42ea-a19f-a13ccbc2e56f" in namespace "projected-4476" to be "Succeeded or Failed"
Aug 21 13:24:20.464: INFO: Pod "pod-projected-secrets-c31206e3-d4d2-42ea-a19f-a13ccbc2e56f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.782854ms
Aug 21 13:24:22.472: INFO: Pod "pod-projected-secrets-c31206e3-d4d2-42ea-a19f-a13ccbc2e56f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034619349s
Aug 21 13:24:24.512: INFO: Pod "pod-projected-secrets-c31206e3-d4d2-42ea-a19f-a13ccbc2e56f": Phase="Running", Reason="", readiness=true. Elapsed: 4.074667988s
Aug 21 13:24:26.520: INFO: Pod "pod-projected-secrets-c31206e3-d4d2-42ea-a19f-a13ccbc2e56f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082085835s
STEP: Saw pod success
Aug 21 13:24:26.520: INFO: Pod "pod-projected-secrets-c31206e3-d4d2-42ea-a19f-a13ccbc2e56f" satisfied condition "Succeeded or Failed"
Aug 21 13:24:26.525: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-c31206e3-d4d2-42ea-a19f-a13ccbc2e56f container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 13:24:26.816: INFO: Waiting for pod pod-projected-secrets-c31206e3-d4d2-42ea-a19f-a13ccbc2e56f to disappear
Aug 21 13:24:26.872: INFO: Pod pod-projected-secrets-c31206e3-d4d2-42ea-a19f-a13ccbc2e56f no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:24:26.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4476" for this suite.

• [SLOW TEST:6.653 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4004,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:24:26.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 21 13:24:27.092: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-a 135edae9-bd33-43e0-82a9-b194d1616c5c 2134590 0 2020-08-21 13:24:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 13:24:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:24:27.093: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-a 135edae9-bd33-43e0-82a9-b194d1616c5c 2134590 0 2020-08-21 13:24:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 13:24:27 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 21 13:24:37.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-a 135edae9-bd33-43e0-82a9-b194d1616c5c 2134628 0 2020-08-21 13:24:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 13:24:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:24:37.107: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-a 135edae9-bd33-43e0-82a9-b194d1616c5c 2134628 0 2020-08-21 13:24:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 13:24:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 21 13:24:47.161: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-a 135edae9-bd33-43e0-82a9-b194d1616c5c 2134659 0 2020-08-21 13:24:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 13:24:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:24:47.162: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-a 135edae9-bd33-43e0-82a9-b194d1616c5c 2134659 0 2020-08-21 13:24:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 13:24:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 21 13:24:57.173: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-a 135edae9-bd33-43e0-82a9-b194d1616c5c 2134690 0 2020-08-21 13:24:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 13:24:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:24:57.174: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-a 135edae9-bd33-43e0-82a9-b194d1616c5c 2134690 0 2020-08-21 13:24:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-21 13:24:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 21 13:25:07.182: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-b 6ba5539a-d6d4-4d13-ba99-47951d238eae 2134720 0 2020-08-21 13:25:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-21 13:25:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:25:07.183: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-b 6ba5539a-d6d4-4d13-ba99-47951d238eae 2134720 0 2020-08-21 13:25:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-21 13:25:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 21 13:25:17.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-b 6ba5539a-d6d4-4d13-ba99-47951d238eae 2134750 0 2020-08-21 13:25:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-21 13:25:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:25:17.196: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8649 /api/v1/namespaces/watch-8649/configmaps/e2e-watch-test-configmap-b 6ba5539a-d6d4-4d13-ba99-47951d238eae 2134750 0 2020-08-21 13:25:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-21 13:25:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:25:27.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8649" for this suite.

• [SLOW TEST:60.258 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":241,"skipped":4048,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:25:27.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0821 13:25:28.469012      10 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 13:25:28.469: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:25:28.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4601" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":242,"skipped":4057,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:25:28.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-729dff07-e24c-4f29-8a08-86c2d702bba0 in namespace container-probe-1108
Aug 21 13:25:34.685: INFO: Started pod busybox-729dff07-e24c-4f29-8a08-86c2d702bba0 in namespace container-probe-1108
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 13:25:34.690: INFO: Initial restart count of pod busybox-729dff07-e24c-4f29-8a08-86c2d702bba0 is 0
Aug 21 13:26:27.163: INFO: Restart count of pod container-probe-1108/busybox-729dff07-e24c-4f29-8a08-86c2d702bba0 is now 1 (52.47283873s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:26:27.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1108" for this suite.

• [SLOW TEST:58.755 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4062,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:26:27.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:26:27.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8488" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":244,"skipped":4077,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:26:27.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 21 13:26:27.589: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 13:26:47.751: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:27:57.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4393" for this suite.

• [SLOW TEST:90.028 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":245,"skipped":4091,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:27:57.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 21 13:28:06.253: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 21 13:28:06.260: INFO: Pod pod-with-prestop-http-hook still exists
Aug 21 13:28:08.260: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 21 13:28:08.267: INFO: Pod pod-with-prestop-http-hook still exists
Aug 21 13:28:10.260: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 21 13:28:10.268: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:28:10.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2971" for this suite.

• [SLOW TEST:12.806 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4181,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:28:10.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Aug 21 13:28:10.450: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2402" to be "Succeeded or Failed"
Aug 21 13:28:10.461: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.396736ms
Aug 21 13:28:12.702: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25238444s
Aug 21 13:28:14.709: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25894939s
Aug 21 13:28:16.715: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.264958361s
STEP: Saw pod success
Aug 21 13:28:16.715: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug 21 13:28:16.792: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 21 13:28:17.186: INFO: Waiting for pod pod-host-path-test to disappear
Aug 21 13:28:17.213: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:28:17.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2402" for this suite.

• [SLOW TEST:6.919 seconds]
[sig-storage] HostPath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4196,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:28:17.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 21 13:28:17.581: INFO: namespace kubectl-3448
Aug 21 13:28:17.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3448'
Aug 21 13:28:19.251: INFO: stderr: ""
Aug 21 13:28:19.251: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 21 13:28:20.258: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 13:28:20.258: INFO: Found 0 / 1
Aug 21 13:28:21.258: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 13:28:21.259: INFO: Found 0 / 1
Aug 21 13:28:22.257: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 13:28:22.257: INFO: Found 0 / 1
Aug 21 13:28:23.257: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 13:28:23.257: INFO: Found 1 / 1
Aug 21 13:28:23.257: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 21 13:28:23.262: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 21 13:28:23.262: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 21 13:28:23.262: INFO: wait on agnhost-master startup in kubectl-3448 
Aug 21 13:28:23.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config logs agnhost-master-2nsx5 agnhost-master --namespace=kubectl-3448'
Aug 21 13:28:24.510: INFO: stderr: ""
Aug 21 13:28:24.510: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 21 13:28:24.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3448'
Aug 21 13:28:25.786: INFO: stderr: ""
Aug 21 13:28:25.786: INFO: stdout: "service/rm2 exposed\n"
Aug 21 13:28:25.821: INFO: Service rm2 in namespace kubectl-3448 found.
STEP: exposing service
Aug 21 13:28:27.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3448'
Aug 21 13:28:29.141: INFO: stderr: ""
Aug 21 13:28:29.141: INFO: stdout: "service/rm3 exposed\n"
Aug 21 13:28:29.153: INFO: Service rm3 in namespace kubectl-3448 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:28:31.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3448" for this suite.

• [SLOW TEST:13.944 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":248,"skipped":4203,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:28:31.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-7aa7a1c5-1e35-4d4c-a979-d0e3b9a61a14
STEP: Creating a pod to test consume configMaps
Aug 21 13:28:31.263: INFO: Waiting up to 5m0s for pod "pod-configmaps-09b3d49e-1b86-45a0-ac2b-7b7a528ce8a2" in namespace "configmap-3383" to be "Succeeded or Failed"
Aug 21 13:28:31.274: INFO: Pod "pod-configmaps-09b3d49e-1b86-45a0-ac2b-7b7a528ce8a2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.366833ms
Aug 21 13:28:33.279: INFO: Pod "pod-configmaps-09b3d49e-1b86-45a0-ac2b-7b7a528ce8a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015063268s
Aug 21 13:28:35.890: INFO: Pod "pod-configmaps-09b3d49e-1b86-45a0-ac2b-7b7a528ce8a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.625930774s
Aug 21 13:28:37.895: INFO: Pod "pod-configmaps-09b3d49e-1b86-45a0-ac2b-7b7a528ce8a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.631298015s
STEP: Saw pod success
Aug 21 13:28:37.895: INFO: Pod "pod-configmaps-09b3d49e-1b86-45a0-ac2b-7b7a528ce8a2" satisfied condition "Succeeded or Failed"
Aug 21 13:28:37.908: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-09b3d49e-1b86-45a0-ac2b-7b7a528ce8a2 container configmap-volume-test: 
STEP: delete the pod
Aug 21 13:28:37.971: INFO: Waiting for pod pod-configmaps-09b3d49e-1b86-45a0-ac2b-7b7a528ce8a2 to disappear
Aug 21 13:28:37.986: INFO: Pod pod-configmaps-09b3d49e-1b86-45a0-ac2b-7b7a528ce8a2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:28:37.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3383" for this suite.

• [SLOW TEST:6.821 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4206,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:28:37.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-1009/configmap-test-4d0c3e0b-bb1b-447f-b184-c0648f0a6016
STEP: Creating a pod to test consume configMaps
Aug 21 13:28:38.255: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cd9ee2d-ac60-4313-beca-6866ba8e0c64" in namespace "configmap-1009" to be "Succeeded or Failed"
Aug 21 13:28:38.262: INFO: Pod "pod-configmaps-5cd9ee2d-ac60-4313-beca-6866ba8e0c64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.51217ms
Aug 21 13:28:40.266: INFO: Pod "pod-configmaps-5cd9ee2d-ac60-4313-beca-6866ba8e0c64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010513173s
Aug 21 13:28:42.272: INFO: Pod "pod-configmaps-5cd9ee2d-ac60-4313-beca-6866ba8e0c64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01643937s
STEP: Saw pod success
Aug 21 13:28:42.272: INFO: Pod "pod-configmaps-5cd9ee2d-ac60-4313-beca-6866ba8e0c64" satisfied condition "Succeeded or Failed"
Aug 21 13:28:42.276: INFO: Trying to get logs from node kali-worker pod pod-configmaps-5cd9ee2d-ac60-4313-beca-6866ba8e0c64 container env-test: 
STEP: delete the pod
Aug 21 13:28:42.319: INFO: Waiting for pod pod-configmaps-5cd9ee2d-ac60-4313-beca-6866ba8e0c64 to disappear
Aug 21 13:28:42.347: INFO: Pod pod-configmaps-5cd9ee2d-ac60-4313-beca-6866ba8e0c64 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:28:42.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1009" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4224,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:28:42.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 21 13:28:42.424: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 13:28:42.469: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 13:28:42.473: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
Aug 21 13:28:42.486: INFO: kindnet-kkxd5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 13:28:42.486: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 13:28:42.486: INFO: kube-proxy-vn4t5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 13:28:42.486: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 13:28:42.486: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
Aug 21 13:28:42.499: INFO: kindnet-qzfqb from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 13:28:42.500: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 13:28:42.500: INFO: kube-proxy-c52ll from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 13:28:42.500: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Aug 21 13:28:42.619: INFO: Pod kindnet-kkxd5 requesting resource cpu=100m on Node kali-worker
Aug 21 13:28:42.620: INFO: Pod kindnet-qzfqb requesting resource cpu=100m on Node kali-worker2
Aug 21 13:28:42.620: INFO: Pod kube-proxy-c52ll requesting resource cpu=0m on Node kali-worker2
Aug 21 13:28:42.620: INFO: Pod kube-proxy-vn4t5 requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
Aug 21 13:28:42.620: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Aug 21 13:28:42.630: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-85fd199c-8785-4bfb-b415-6de27575414f.162d4b8f890d6c38], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9722/filler-pod-85fd199c-8785-4bfb-b415-6de27575414f to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-85fd199c-8785-4bfb-b415-6de27575414f.162d4b8fd53e1876], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-85fd199c-8785-4bfb-b415-6de27575414f.162d4b902dc945f5], Reason = [Created], Message = [Created container filler-pod-85fd199c-8785-4bfb-b415-6de27575414f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-85fd199c-8785-4bfb-b415-6de27575414f.162d4b904678f2bf], Reason = [Started], Message = [Started container filler-pod-85fd199c-8785-4bfb-b415-6de27575414f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f07839fe-b1e5-446b-911f-f92e76826c7e.162d4b8f8ac502ae], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9722/filler-pod-f07839fe-b1e5-446b-911f-f92e76826c7e to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f07839fe-b1e5-446b-911f-f92e76826c7e.162d4b901bdbc7f3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f07839fe-b1e5-446b-911f-f92e76826c7e.162d4b9063d21dcb], Reason = [Created], Message = [Created container filler-pod-f07839fe-b1e5-446b-911f-f92e76826c7e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f07839fe-b1e5-446b-911f-f92e76826c7e.162d4b9072d7567c], Reason = [Started], Message = [Started container filler-pod-f07839fe-b1e5-446b-911f-f92e76826c7e]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162d4b90f2fc8fe1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162d4b90f5d95ed2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:28:49.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9722" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:7.433 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":251,"skipped":4231,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:28:49.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-2560
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2560 to expose endpoints map[]
Aug 21 13:28:49.922: INFO: successfully validated that service multi-endpoint-test in namespace services-2560 exposes endpoints map[] (13.10807ms elapsed)
STEP: Creating pod pod1 in namespace services-2560
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2560 to expose endpoints map[pod1:[100]]
Aug 21 13:28:54.148: INFO: successfully validated that service multi-endpoint-test in namespace services-2560 exposes endpoints map[pod1:[100]] (4.146706253s elapsed)
STEP: Creating pod pod2 in namespace services-2560
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2560 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 21 13:28:59.352: INFO: successfully validated that service multi-endpoint-test in namespace services-2560 exposes endpoints map[pod1:[100] pod2:[101]] (5.19785635s elapsed)
STEP: Deleting pod pod1 in namespace services-2560
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2560 to expose endpoints map[pod2:[101]]
Aug 21 13:28:59.424: INFO: successfully validated that service multi-endpoint-test in namespace services-2560 exposes endpoints map[pod2:[101]] (64.930487ms elapsed)
STEP: Deleting pod pod2 in namespace services-2560
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2560 to expose endpoints map[]
Aug 21 13:28:59.445: INFO: successfully validated that service multi-endpoint-test in namespace services-2560 exposes endpoints map[] (14.87148ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:28:59.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2560" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:9.928 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":252,"skipped":4242,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:28:59.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-1a49a66f-0f68-406b-aa5a-a95312012943
STEP: Creating a pod to test consume configMaps
Aug 21 13:28:59.887: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-683224cd-f5d7-4693-ad8b-5975799afa0b" in namespace "projected-2601" to be "Succeeded or Failed"
Aug 21 13:29:00.014: INFO: Pod "pod-projected-configmaps-683224cd-f5d7-4693-ad8b-5975799afa0b": Phase="Pending", Reason="", readiness=false. Elapsed: 125.981787ms
Aug 21 13:29:02.018: INFO: Pod "pod-projected-configmaps-683224cd-f5d7-4693-ad8b-5975799afa0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130734864s
Aug 21 13:29:04.023: INFO: Pod "pod-projected-configmaps-683224cd-f5d7-4693-ad8b-5975799afa0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135489344s
STEP: Saw pod success
Aug 21 13:29:04.023: INFO: Pod "pod-projected-configmaps-683224cd-f5d7-4693-ad8b-5975799afa0b" satisfied condition "Succeeded or Failed"
Aug 21 13:29:04.027: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-683224cd-f5d7-4693-ad8b-5975799afa0b container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 13:29:04.208: INFO: Waiting for pod pod-projected-configmaps-683224cd-f5d7-4693-ad8b-5975799afa0b to disappear
Aug 21 13:29:04.239: INFO: Pod pod-projected-configmaps-683224cd-f5d7-4693-ad8b-5975799afa0b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:29:04.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2601" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4249,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:29:04.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 21 13:29:04.353: INFO: Waiting up to 5m0s for pod "pod-fd843e62-1c55-49ef-ab2b-c20bf9063f03" in namespace "emptydir-9370" to be "Succeeded or Failed"
Aug 21 13:29:04.374: INFO: Pod "pod-fd843e62-1c55-49ef-ab2b-c20bf9063f03": Phase="Pending", Reason="", readiness=false. Elapsed: 21.131112ms
Aug 21 13:29:06.379: INFO: Pod "pod-fd843e62-1c55-49ef-ab2b-c20bf9063f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026138216s
Aug 21 13:29:08.384: INFO: Pod "pod-fd843e62-1c55-49ef-ab2b-c20bf9063f03": Phase="Running", Reason="", readiness=true. Elapsed: 4.031127519s
Aug 21 13:29:10.389: INFO: Pod "pod-fd843e62-1c55-49ef-ab2b-c20bf9063f03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03635233s
STEP: Saw pod success
Aug 21 13:29:10.389: INFO: Pod "pod-fd843e62-1c55-49ef-ab2b-c20bf9063f03" satisfied condition "Succeeded or Failed"
Aug 21 13:29:10.393: INFO: Trying to get logs from node kali-worker pod pod-fd843e62-1c55-49ef-ab2b-c20bf9063f03 container test-container: 
STEP: delete the pod
Aug 21 13:29:10.413: INFO: Waiting for pod pod-fd843e62-1c55-49ef-ab2b-c20bf9063f03 to disappear
Aug 21 13:29:10.418: INFO: Pod pod-fd843e62-1c55-49ef-ab2b-c20bf9063f03 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:29:10.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9370" for this suite.

• [SLOW TEST:6.195 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4273,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:29:10.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Aug 21 13:29:10.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config cluster-info'
Aug 21 13:29:11.780: INFO: stderr: ""
Aug 21 13:29:11.780: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32915\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32915/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:29:11.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1138" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":255,"skipped":4364,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:29:11.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-088dfaeb-f704-4002-927e-dd17a3f44d12
STEP: Creating a pod to test consume secrets
Aug 21 13:29:11.904: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b267902b-6caf-443f-b0c7-12201895dd0c" in namespace "projected-8485" to be "Succeeded or Failed"
Aug 21 13:29:11.922: INFO: Pod "pod-projected-secrets-b267902b-6caf-443f-b0c7-12201895dd0c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.53384ms
Aug 21 13:29:13.928: INFO: Pod "pod-projected-secrets-b267902b-6caf-443f-b0c7-12201895dd0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022987622s
Aug 21 13:29:15.934: INFO: Pod "pod-projected-secrets-b267902b-6caf-443f-b0c7-12201895dd0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029357918s
STEP: Saw pod success
Aug 21 13:29:15.934: INFO: Pod "pod-projected-secrets-b267902b-6caf-443f-b0c7-12201895dd0c" satisfied condition "Succeeded or Failed"
Aug 21 13:29:15.938: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-b267902b-6caf-443f-b0c7-12201895dd0c container secret-volume-test: 
STEP: delete the pod
Aug 21 13:29:15.966: INFO: Waiting for pod pod-projected-secrets-b267902b-6caf-443f-b0c7-12201895dd0c to disappear
Aug 21 13:29:15.988: INFO: Pod pod-projected-secrets-b267902b-6caf-443f-b0c7-12201895dd0c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:29:15.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8485" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4439,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:29:15.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:29:20.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8430" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4444,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:29:20.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9904
STEP: Creating active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-9904
STEP: creating replication controller externalsvc in namespace services-9904
I0821 13:29:20.457126      10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9904, replica count: 2
I0821 13:29:23.508290      10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 13:29:26.508988      10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 21 13:29:26.834: INFO: Creating new exec pod
Aug 21 13:29:30.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32915 --kubeconfig=/root/.kube/config exec --namespace=services-9904 execpodct8h5 -- /bin/sh -x -c nslookup clusterip-service'
Aug 21 13:29:32.259: INFO: stderr: "I0821 13:29:32.155980    4001 log.go:172] (0x40009f40b0) (0x4000813400) Create stream\nI0821 13:29:32.159605    4001 log.go:172] (0x40009f40b0) (0x4000813400) Stream added, broadcasting: 1\nI0821 13:29:32.170271    4001 log.go:172] (0x40009f40b0) Reply frame received for 1\nI0821 13:29:32.171826    4001 log.go:172] (0x40009f40b0) (0x4000974000) Create stream\nI0821 13:29:32.171947    4001 log.go:172] (0x40009f40b0) (0x4000974000) Stream added, broadcasting: 3\nI0821 13:29:32.174045    4001 log.go:172] (0x40009f40b0) Reply frame received for 3\nI0821 13:29:32.174680    4001 log.go:172] (0x40009f40b0) (0x40008134a0) Create stream\nI0821 13:29:32.174790    4001 log.go:172] (0x40009f40b0) (0x40008134a0) Stream added, broadcasting: 5\nI0821 13:29:32.176051    4001 log.go:172] (0x40009f40b0) Reply frame received for 5\nI0821 13:29:32.233871    4001 log.go:172] (0x40009f40b0) Data frame received for 5\nI0821 13:29:32.234004    4001 log.go:172] (0x40008134a0) (5) Data frame handling\nI0821 13:29:32.234304    4001 log.go:172] (0x40008134a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0821 13:29:32.240643    4001 log.go:172] (0x40009f40b0) Data frame received for 3\nI0821 13:29:32.240781    4001 log.go:172] (0x4000974000) (3) Data frame handling\nI0821 13:29:32.240856    4001 log.go:172] (0x4000974000) (3) Data frame sent\nI0821 13:29:32.242004    4001 log.go:172] (0x40009f40b0) Data frame received for 3\nI0821 13:29:32.242097    4001 log.go:172] (0x4000974000) (3) Data frame handling\nI0821 13:29:32.242188    4001 log.go:172] (0x4000974000) (3) Data frame sent\nI0821 13:29:32.242380    4001 log.go:172] (0x40009f40b0) Data frame received for 3\nI0821 13:29:32.242450    4001 log.go:172] (0x4000974000) (3) Data frame handling\nI0821 13:29:32.242539    4001 log.go:172] (0x40009f40b0) Data frame received for 5\nI0821 13:29:32.242616    4001 log.go:172] (0x40008134a0) (5) Data frame handling\nI0821 13:29:32.244073    4001 log.go:172] (0x40009f40b0) Data frame received for 1\nI0821 13:29:32.244154    4001 log.go:172] (0x4000813400) (1) Data frame handling\nI0821 13:29:32.244232    4001 log.go:172] (0x4000813400) (1) Data frame sent\nI0821 13:29:32.247041    4001 log.go:172] (0x40009f40b0) (0x4000813400) Stream removed, broadcasting: 1\nI0821 13:29:32.247834    4001 log.go:172] (0x40009f40b0) Go away received\nI0821 13:29:32.251425    4001 log.go:172] (0x40009f40b0) (0x4000813400) Stream removed, broadcasting: 1\nI0821 13:29:32.251729    4001 log.go:172] (0x40009f40b0) (0x4000974000) Stream removed, broadcasting: 3\nI0821 13:29:32.251913    4001 log.go:172] (0x40009f40b0) (0x40008134a0) Stream removed, broadcasting: 5\n"
Aug 21 13:29:32.260: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9904.svc.cluster.local\tcanonical name = externalsvc.services-9904.svc.cluster.local.\nName:\texternalsvc.services-9904.svc.cluster.local\nAddress: 10.108.140.97\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-9904, will wait for the garbage collector to delete the pods
Aug 21 13:29:32.321: INFO: Deleting ReplicationController externalsvc took: 5.698221ms
Aug 21 13:29:32.622: INFO: Terminating ReplicationController externalsvc pods took: 300.450272ms
Aug 21 13:29:49.331: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:29:49.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9904" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:29.231 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":258,"skipped":4458,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:29:49.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-6665db72-ac76-42c3-8635-8d5a8222489d
STEP: Creating secret with name s-test-opt-upd-6c2c47e8-7547-4412-b499-e91069f65b3d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6665db72-ac76-42c3-8635-8d5a8222489d
STEP: Updating secret s-test-opt-upd-6c2c47e8-7547-4412-b499-e91069f65b3d
STEP: Creating secret with name s-test-opt-create-28bd6365-2215-4bab-9e0d-e18d950ca94f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:29:57.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8455" for this suite.

• [SLOW TEST:8.527 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4459,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:29:57.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 21 13:30:05.084: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7163 pod-service-account-b5bd2e04-2fdd-4c87-a34d-2b216129a376 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 21 13:30:06.576: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7163 pod-service-account-b5bd2e04-2fdd-4c87-a34d-2b216129a376 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 21 13:30:08.042: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7163 pod-service-account-b5bd2e04-2fdd-4c87-a34d-2b216129a376 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:30:09.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7163" for this suite.

• [SLOW TEST:11.776 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":260,"skipped":4476,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:30:09.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:30:16.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-680" for this suite.

• [SLOW TEST:6.406 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a read only busybox container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4488,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:30:16.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 21 13:30:21.280: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 21 13:30:24.323: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613420, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 13:30:26.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613420, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 13:30:28.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613420, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 13:30:30.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613421, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613420, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 21 13:30:33.399: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 13:30:33.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3551-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:30:34.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1979" for this suite.
STEP: Destroying namespace "webhook-1979-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.611 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":262,"skipped":4498,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:30:35.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-d134d8f2-7759-4c95-85d3-3f4e7fd60b5c in namespace container-probe-1471
Aug 21 13:30:42.438: INFO: Started pod test-webserver-d134d8f2-7759-4c95-85d3-3f4e7fd60b5c in namespace container-probe-1471
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 13:30:42.442: INFO: Initial restart count of pod test-webserver-d134d8f2-7759-4c95-85d3-3f4e7fd60b5c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:34:44.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1471" for this suite.

• [SLOW TEST:248.743 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4503,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:34:44.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 21 13:34:44.650: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ced874d2-1a7c-43a3-bfe2-b97762f7f296" in namespace "downward-api-9935" to be "Succeeded or Failed"
Aug 21 13:34:44.668: INFO: Pod "downwardapi-volume-ced874d2-1a7c-43a3-bfe2-b97762f7f296": Phase="Pending", Reason="", readiness=false. Elapsed: 17.366856ms
Aug 21 13:34:46.726: INFO: Pod "downwardapi-volume-ced874d2-1a7c-43a3-bfe2-b97762f7f296": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075927735s
Aug 21 13:34:48.822: INFO: Pod "downwardapi-volume-ced874d2-1a7c-43a3-bfe2-b97762f7f296": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171749351s
STEP: Saw pod success
Aug 21 13:34:48.822: INFO: Pod "downwardapi-volume-ced874d2-1a7c-43a3-bfe2-b97762f7f296" satisfied condition "Succeeded or Failed"
Aug 21 13:34:48.830: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-ced874d2-1a7c-43a3-bfe2-b97762f7f296 container client-container: 
STEP: delete the pod
Aug 21 13:34:48.874: INFO: Waiting for pod downwardapi-volume-ced874d2-1a7c-43a3-bfe2-b97762f7f296 to disappear
Aug 21 13:34:48.883: INFO: Pod downwardapi-volume-ced874d2-1a7c-43a3-bfe2-b97762f7f296 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:34:48.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9935" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4533,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:34:48.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 21 13:34:49.012: INFO: Waiting up to 5m0s for pod "pod-fe4811a9-c266-45f5-b003-ffe0f8d24410" in namespace "emptydir-1936" to be "Succeeded or Failed"
Aug 21 13:34:49.032: INFO: Pod "pod-fe4811a9-c266-45f5-b003-ffe0f8d24410": Phase="Pending", Reason="", readiness=false. Elapsed: 19.628309ms
Aug 21 13:34:51.040: INFO: Pod "pod-fe4811a9-c266-45f5-b003-ffe0f8d24410": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027937286s
Aug 21 13:34:53.048: INFO: Pod "pod-fe4811a9-c266-45f5-b003-ffe0f8d24410": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03599358s
STEP: Saw pod success
Aug 21 13:34:53.048: INFO: Pod "pod-fe4811a9-c266-45f5-b003-ffe0f8d24410" satisfied condition "Succeeded or Failed"
Aug 21 13:34:53.053: INFO: Trying to get logs from node kali-worker pod pod-fe4811a9-c266-45f5-b003-ffe0f8d24410 container test-container: 
STEP: delete the pod
Aug 21 13:34:53.099: INFO: Waiting for pod pod-fe4811a9-c266-45f5-b003-ffe0f8d24410 to disappear
Aug 21 13:34:53.124: INFO: Pod pod-fe4811a9-c266-45f5-b003-ffe0f8d24410 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:34:53.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1936" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4533,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:34:53.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 21 13:34:53.223: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 13:34:53.275: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 13:34:53.280: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 21 13:34:53.304: INFO: kindnet-kkxd5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 13:34:53.304: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 13:34:53.304: INFO: kube-proxy-vn4t5 from kube-system started at 2020-08-15 09:40:28 +0000 UTC (1 container statuses recorded)
Aug 21 13:34:53.304: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 13:34:53.304: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 21 13:34:53.315: INFO: kube-proxy-c52ll from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 13:34:53.315: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 13:34:53.315: INFO: kindnet-qzfqb from kube-system started at 2020-08-15 09:40:30 +0000 UTC (1 container statuses recorded)
Aug 21 13:34:53.315: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c4453dba-7f1d-41f1-b02c-fdbf66dfc1b6 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-c4453dba-7f1d-41f1-b02c-fdbf66dfc1b6 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c4453dba-7f1d-41f1-b02c-fdbf66dfc1b6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:35:11.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3488" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:19.312 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":266,"skipped":4554,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:35:12.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-57899b39-3937-468f-be97-b2aade75c760
STEP: Creating configMap with name cm-test-opt-upd-85e5491b-6896-4650-bc90-6123937a9bad
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-57899b39-3937-468f-be97-b2aade75c760
STEP: Updating configmap cm-test-opt-upd-85e5491b-6896-4650-bc90-6123937a9bad
STEP: Creating configMap with name cm-test-opt-create-58b3adf1-ca61-4933-97eb-00c75e608658
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:35:28.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7177" for this suite.

• [SLOW TEST:16.371 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4572,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:35:28.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:35:33.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5889" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":268,"skipped":4581,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:35:33.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-f3f8d554-f0ae-4913-8bc8-45643052a287
STEP: Creating a pod to test consume secrets
Aug 21 13:35:35.150: INFO: Waiting up to 5m0s for pod "pod-secrets-ef401df5-4ab0-42e5-a074-a8c6621d554d" in namespace "secrets-3367" to be "Succeeded or Failed"
Aug 21 13:35:35.399: INFO: Pod "pod-secrets-ef401df5-4ab0-42e5-a074-a8c6621d554d": Phase="Pending", Reason="", readiness=false. Elapsed: 249.315285ms
Aug 21 13:35:37.741: INFO: Pod "pod-secrets-ef401df5-4ab0-42e5-a074-a8c6621d554d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.59080363s
Aug 21 13:35:39.919: INFO: Pod "pod-secrets-ef401df5-4ab0-42e5-a074-a8c6621d554d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.769550304s
Aug 21 13:35:42.146: INFO: Pod "pod-secrets-ef401df5-4ab0-42e5-a074-a8c6621d554d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.996420456s
Aug 21 13:35:44.320: INFO: Pod "pod-secrets-ef401df5-4ab0-42e5-a074-a8c6621d554d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.170610438s
STEP: Saw pod success
Aug 21 13:35:44.321: INFO: Pod "pod-secrets-ef401df5-4ab0-42e5-a074-a8c6621d554d" satisfied condition "Succeeded or Failed"
Aug 21 13:35:44.378: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-ef401df5-4ab0-42e5-a074-a8c6621d554d container secret-volume-test: 
STEP: delete the pod
Aug 21 13:35:44.560: INFO: Waiting for pod pod-secrets-ef401df5-4ab0-42e5-a074-a8c6621d554d to disappear
Aug 21 13:35:44.568: INFO: Pod pod-secrets-ef401df5-4ab0-42e5-a074-a8c6621d554d no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:35:44.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3367" for this suite.

• [SLOW TEST:11.172 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4591,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:35:44.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 21 13:35:44.867: INFO: Creating deployment "test-recreate-deployment"
Aug 21 13:35:44.875: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 21 13:35:44.936: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 21 13:35:47.177: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 21 13:35:47.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613745, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613745, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613745, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613744, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 13:35:49.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613745, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613745, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613745, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733613744, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 13:35:51.187: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 21 13:35:51.202: INFO: Updating deployment test-recreate-deployment
Aug 21 13:35:51.202: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 21 13:35:51.850: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-9547 /apis/apps/v1/namespaces/deployment-9547/deployments/test-recreate-deployment 3d984641-d2e7-4aef-a7d6-b6c15af5a980 2137692 2 2020-08-21 13:35:44 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-21 13:35:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-21 13:35:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 
112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40049d46c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-21 13:35:51 +0000 UTC,LastTransitionTime:2020-08-21 13:35:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-08-21 13:35:51 +0000 UTC,LastTransitionTime:2020-08-21 13:35:44 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 21 13:35:51.861: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-9547 /apis/apps/v1/namespaces/deployment-9547/replicasets/test-recreate-deployment-d5667d9c7 fdc19345-0525-44d3-a3f4-963711fa8041 2137688 1 2020-08-21 13:35:51 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 3d984641-d2e7-4aef-a7d6-b6c15af5a980 0x40049d4bd0 0x40049d4bd1}] []  [{kube-controller-manager Update apps/v1 2020-08-21 13:35:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 100 57 56 52 54 52 49 45 100 50 101 55 45 52 97 101 102 45 97 55 100 54 45 98 54 99 49 53 97 102 53 97 57 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 
34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40049d4c48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 13:35:51.862: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 21 13:35:51.863: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-9547 /apis/apps/v1/namespaces/deployment-9547/replicasets/test-recreate-deployment-74d98b5f7c 54130134-5032-41a7-b3e7-c5f6d8ec1381 2137679 2 2020-08-21 13:35:44 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3d984641-d2e7-4aef-a7d6-b6c15af5a980 0x40049d4ad7 0x40049d4ad8}] []  [{kube-controller-manager Update apps/v1 2020-08-21 13:35:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 100 57 56 52 54 52 49 45 100 50 101 55 45 52 97 101 102 45 97 55 100 54 45 98 54 99 49 53 97 102 53 97 57 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 
115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40049d4b68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 21 13:35:51.872: INFO: Pod "test-recreate-deployment-d5667d9c7-z7nss" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-z7nss test-recreate-deployment-d5667d9c7- deployment-9547 /api/v1/namespaces/deployment-9547/pods/test-recreate-deployment-d5667d9c7-z7nss 525f46c3-6439-4102-99c3-35a0e41d27d3 2137691 0 2020-08-21 13:35:51 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 fdc19345-0525-44d3-a3f4-963711fa8041 0x40049d5100 0x40049d5101}] []  [{kube-controller-manager Update v1 2020-08-21 13:35:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 100 99 49 57 51 52 53 45 48 53 50 53 45 52 52 100 51 45 97 51 102 52 45 57 54 51 55 49 49 102 97 56 48 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-21 13:35:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 
116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4snbm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4snbm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4snbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:n
il,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:35:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:35:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:35:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-21 13:35:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-08-21 13:35:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:35:51.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9547" for this suite.

• [SLOW TEST:7.307 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":270,"skipped":4602,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:35:51.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 21 13:35:58.754: INFO: Successfully updated pod "annotationupdate66d6db32-a54b-42cb-b1bd-3e862ce0205b"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:36:00.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4003" for this suite.

• [SLOW TEST:8.922 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4625,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:36:00.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:36:12.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5531" for this suite.

• [SLOW TEST:11.655 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":272,"skipped":4664,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:36:12.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 21 13:36:12.655: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3563 /api/v1/namespaces/watch-3563/configmaps/e2e-watch-test-resource-version 32bd834e-e932-4cef-8f35-f1b934b5cc91 2137822 0 2020-08-21 13:36:12 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-08-21 13:36:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 21 13:36:12.657: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3563 /api/v1/namespaces/watch-3563/configmaps/e2e-watch-test-resource-version 32bd834e-e932-4cef-8f35-f1b934b5cc91 2137823 0 2020-08-21 13:36:12 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-08-21 13:36:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:36:12.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3563" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":273,"skipped":4665,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 21 13:36:12.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-2364e8df-f78a-40d0-bce5-84a9b9fe3878
STEP: Creating a pod to test consume secrets
Aug 21 13:36:12.746: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-637cd8b9-5325-4443-9984-a98297ef985b" in namespace "projected-5571" to be "Succeeded or Failed"
Aug 21 13:36:12.786: INFO: Pod "pod-projected-secrets-637cd8b9-5325-4443-9984-a98297ef985b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.320711ms
Aug 21 13:36:14.792: INFO: Pod "pod-projected-secrets-637cd8b9-5325-4443-9984-a98297ef985b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045696252s
Aug 21 13:36:16.798: INFO: Pod "pod-projected-secrets-637cd8b9-5325-4443-9984-a98297ef985b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051822867s
STEP: Saw pod success
Aug 21 13:36:16.798: INFO: Pod "pod-projected-secrets-637cd8b9-5325-4443-9984-a98297ef985b" satisfied condition "Succeeded or Failed"
Aug 21 13:36:16.803: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-637cd8b9-5325-4443-9984-a98297ef985b container projected-secret-volume-test: 
STEP: delete the pod
Aug 21 13:36:16.842: INFO: Waiting for pod pod-projected-secrets-637cd8b9-5325-4443-9984-a98297ef985b to disappear
Aug 21 13:36:16.845: INFO: Pod pod-projected-secrets-637cd8b9-5325-4443-9984-a98297ef985b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 21 13:36:16.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5571" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4700,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
SSSSSSSSSSSSSSSSS
Aug 21 13:36:16.859: INFO: Running AfterSuite actions on all nodes
Aug 21 13:36:16.860: INFO: Running AfterSuite actions on node 1
Aug 21 13:36:16.860: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":274,"skipped":4717,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Kubectl logs [It] should be able to retrieve and filter logs  [Conformance] 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1329

Ran 275 of 4992 Specs in 6014.820 seconds
FAIL! -- 274 Passed | 1 Failed | 0 Pending | 4717 Skipped
--- FAIL: TestE2E (6015.59s)
FAIL