I0828 12:56:37.747298 11 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0828 12:56:37.754413 11 e2e.go:124] Starting e2e run "d954fb53-acf7-4ebc-8e0d-160968c94da0" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598619384 - Will randomize all specs
Will run 275 of 4992 specs

Aug 28 12:56:38.318: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 12:56:38.375: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 28 12:56:38.564: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 28 12:56:38.772: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 28 12:56:38.773: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 28 12:56:38.773: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 28 12:56:38.827: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 28 12:56:38.827: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 28 12:56:38.828: INFO: e2e test version: v1.18.8
Aug 28 12:56:38.833: INFO: kube-apiserver version: v1.18.8
Aug 28 12:56:38.836: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 12:56:38.855: INFO: Cluster IP family: ipv4
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:56:38.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
Aug 28 12:56:39.047: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 12:56:39.051: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 12:56:40.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4054" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:56:40.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 12:56:40.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-183b4f5f-491b-46ae-a694-e263274d7111" in namespace "projected-9998" to be "Succeeded or Failed"
Aug 28 12:56:40.437: INFO: Pod "downwardapi-volume-183b4f5f-491b-46ae-a694-e263274d7111": Phase="Pending", Reason="", readiness=false. Elapsed: 43.922349ms
Aug 28 12:56:42.514: INFO: Pod "downwardapi-volume-183b4f5f-491b-46ae-a694-e263274d7111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121380501s
Aug 28 12:56:44.519: INFO: Pod "downwardapi-volume-183b4f5f-491b-46ae-a694-e263274d7111": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126682755s
Aug 28 12:56:46.526: INFO: Pod "downwardapi-volume-183b4f5f-491b-46ae-a694-e263274d7111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133285782s
STEP: Saw pod success
Aug 28 12:56:46.526: INFO: Pod "downwardapi-volume-183b4f5f-491b-46ae-a694-e263274d7111" satisfied condition "Succeeded or Failed"
Aug 28 12:56:46.531: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-183b4f5f-491b-46ae-a694-e263274d7111 container client-container:
STEP: delete the pod
Aug 28 12:56:46.598: INFO: Waiting for pod downwardapi-volume-183b4f5f-491b-46ae-a694-e263274d7111 to disappear
Aug 28 12:56:46.673: INFO: Pod downwardapi-volume-183b4f5f-491b-46ae-a694-e263274d7111 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 12:56:46.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9998" for this suite.
• [SLOW TEST:6.456 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":19,"failed":0}
SSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:56:46.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-681/configmap-test-b57ac44c-bba8-4637-bf12-346091f5a4a4
STEP: Creating a pod to test consume configMaps
Aug 28 12:56:46.796: INFO: Waiting up to 5m0s for pod "pod-configmaps-9ea93f6c-4987-44e5-8301-006542b438de" in namespace "configmap-681" to be "Succeeded or Failed"
Aug 28 12:56:46.961: INFO: Pod "pod-configmaps-9ea93f6c-4987-44e5-8301-006542b438de": Phase="Pending", Reason="", readiness=false. Elapsed: 165.322749ms
Aug 28 12:56:48.969: INFO: Pod "pod-configmaps-9ea93f6c-4987-44e5-8301-006542b438de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173695893s
Aug 28 12:56:50.977: INFO: Pod "pod-configmaps-9ea93f6c-4987-44e5-8301-006542b438de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.18118833s
STEP: Saw pod success
Aug 28 12:56:50.977: INFO: Pod "pod-configmaps-9ea93f6c-4987-44e5-8301-006542b438de" satisfied condition "Succeeded or Failed"
Aug 28 12:56:50.982: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-9ea93f6c-4987-44e5-8301-006542b438de container env-test:
STEP: delete the pod
Aug 28 12:56:51.194: INFO: Waiting for pod pod-configmaps-9ea93f6c-4987-44e5-8301-006542b438de to disappear
Aug 28 12:56:51.238: INFO: Pod pod-configmaps-9ea93f6c-4987-44e5-8301-006542b438de no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 12:56:51.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-681" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":23,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:56:51.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9222.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9222.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9222.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9222.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9222.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9222.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 12:57:01.646: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:01.650: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:01.655: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:01.659: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:01.668: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:01.670: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:01.673: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:01.677: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:01.685: INFO: Lookups using dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local]
Aug 28 12:57:06.692: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:06.697: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:06.701: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:06.705: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:06.718: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:06.723: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:06.728: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:06.732: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:06.740: INFO: Lookups using dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local]
Aug 28 12:57:11.691: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:11.695: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:11.699: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:11.702: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:11.711: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:11.714: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:11.716: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:11.719: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:11.726: INFO: Lookups using dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local]
Aug 28 12:57:16.692: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:16.696: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:16.700: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:16.704: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:16.716: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:16.721: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:16.726: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:16.730: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:16.746: INFO: Lookups using dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local]
Aug 28 12:57:21.692: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:21.695: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:21.699: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:21.702: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:21.713: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:21.716: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:21.720: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:21.921: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:21.976: INFO: Lookups using dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local]
Aug 28 12:57:26.693: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:26.699: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:26.704: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:26.708: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:26.720: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:26.725: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:26.729: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:26.733: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local from pod dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814: the server could not find the requested resource (get pods dns-test-fa49472f-75a6-4f37-b918-60463ff94814)
Aug 28 12:57:26.741: INFO: Lookups using dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9222.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9222.svc.cluster.local jessie_udp@dns-test-service-2.dns-9222.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9222.svc.cluster.local]
Aug 28 12:57:32.167: INFO: DNS probes using dns-9222/dns-test-fa49472f-75a6-4f37-b918-60463ff94814 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 12:57:33.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9222" for this suite.
• [SLOW TEST:42.322 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":4,"skipped":58,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:57:33.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-14d5480f-d1df-4529-9fbd-0ca751ece632
STEP: Creating a pod to test consume secrets
Aug 28 12:57:34.987: INFO: Waiting up to 5m0s for pod "pod-secrets-102636e0-cb4a-4f0b-a1ae-7ae189e57281" in namespace "secrets-4761" to be "Succeeded or Failed"
Aug 28 12:57:35.271: INFO: Pod "pod-secrets-102636e0-cb4a-4f0b-a1ae-7ae189e57281": Phase="Pending", Reason="", readiness=false. Elapsed: 283.997477ms
Aug 28 12:57:37.517: INFO: Pod "pod-secrets-102636e0-cb4a-4f0b-a1ae-7ae189e57281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.529988125s
Aug 28 12:57:39.582: INFO: Pod "pod-secrets-102636e0-cb4a-4f0b-a1ae-7ae189e57281": Phase="Pending", Reason="", readiness=false. Elapsed: 4.59481048s
Aug 28 12:57:41.590: INFO: Pod "pod-secrets-102636e0-cb4a-4f0b-a1ae-7ae189e57281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.602491697s
STEP: Saw pod success
Aug 28 12:57:41.590: INFO: Pod "pod-secrets-102636e0-cb4a-4f0b-a1ae-7ae189e57281" satisfied condition "Succeeded or Failed"
Aug 28 12:57:41.596: INFO: Trying to get logs from node kali-worker pod pod-secrets-102636e0-cb4a-4f0b-a1ae-7ae189e57281 container secret-volume-test:
STEP: delete the pod
Aug 28 12:57:41.975: INFO: Waiting for pod pod-secrets-102636e0-cb4a-4f0b-a1ae-7ae189e57281 to disappear
Aug 28 12:57:42.219: INFO: Pod pod-secrets-102636e0-cb4a-4f0b-a1ae-7ae189e57281 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 12:57:42.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4761" for this suite.
STEP: Destroying namespace "secret-namespace-2785" for this suite.
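For context, the shape of the objects this spec creates can be sketched as a manifest like the following. This is an illustrative reconstruction, not the suite's generated objects; all names and the secret payload are made up (the suite generates UUID-suffixed names). A secret with the same name in a different namespace cannot interfere, because `secretName` resolves within the pod's own namespace:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test          # hypothetical; the suite uses a UUID-suffixed name
  namespace: secrets-example # hypothetical namespace
data:
  data-1: dmFsdWUtMQ==       # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets          # hypothetical
  namespace: secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # read the mounted key so the pod exits Succeeded once the volume is visible
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
```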
• [SLOW TEST:9.050 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":101,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:57:42.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 28 12:57:49.568: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 12:57:49.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3225" for this suite.
• [SLOW TEST:7.095 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
blackbox test
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
on terminated container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":133,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:57:49.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0828 12:58:30.079060      11 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 28 12:58:30.080: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 12:58:30.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4707" for this suite.
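The orphaning behavior this spec verifies corresponds to deleting the replication controller with an orphaning propagation policy, so dependent pods are left behind rather than cascade-deleted. A hedged sketch of the delete options involved (the suite drives this through the Go client, not a literal request body):

```yaml
# Illustrative DeleteOptions sent with the RC deletion.
# propagationPolicy: Orphan detaches dependents instead of deleting them;
# the alternatives are Background and Foreground cascading deletion.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```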
• [SLOW TEST:40.358 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":7,"skipped":165,"failed":0}
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:58:30.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-64d7c762-097c-489f-bd86-9dc27e5997d3 in namespace container-probe-3530
Aug 28 12:58:40.931: INFO: Started pod liveness-64d7c762-097c-489f-bd86-9dc27e5997d3 in namespace container-probe-3530
STEP: checking the pod's current state and verifying that restartCount is present
Aug 28 12:58:41.478: INFO: Initial restart count of pod liveness-64d7c762-097c-489f-bd86-9dc27e5997d3 is 0
Aug 28 12:59:05.444: INFO: Restart count of pod container-probe-3530/liveness-64d7c762-097c-489f-bd86-9dc27e5997d3 is now 1 (23.965206425s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 12:59:05.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3530" for this suite.
• [SLOW TEST:35.795 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":165,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:59:05.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 12:59:06.411: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d46f59e5-6bf9-4f91-86d3-42c782a27854" in namespace "downward-api-11" to be "Succeeded or Failed"
Aug 28 12:59:06.967: INFO: Pod "downwardapi-volume-d46f59e5-6bf9-4f91-86d3-42c782a27854": Phase="Pending", Reason="", readiness=false. Elapsed: 555.8167ms
Aug 28 12:59:09.035: INFO: Pod "downwardapi-volume-d46f59e5-6bf9-4f91-86d3-42c782a27854": Phase="Pending", Reason="", readiness=false. Elapsed: 2.624142073s
Aug 28 12:59:11.058: INFO: Pod "downwardapi-volume-d46f59e5-6bf9-4f91-86d3-42c782a27854": Phase="Pending", Reason="", readiness=false. Elapsed: 4.647051541s
Aug 28 12:59:13.089: INFO: Pod "downwardapi-volume-d46f59e5-6bf9-4f91-86d3-42c782a27854": Phase="Running", Reason="", readiness=true. Elapsed: 6.677492269s
Aug 28 12:59:15.099: INFO: Pod "downwardapi-volume-d46f59e5-6bf9-4f91-86d3-42c782a27854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.687798801s
STEP: Saw pod success
Aug 28 12:59:15.099: INFO: Pod "downwardapi-volume-d46f59e5-6bf9-4f91-86d3-42c782a27854" satisfied condition "Succeeded or Failed"
Aug 28 12:59:15.310: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d46f59e5-6bf9-4f91-86d3-42c782a27854 container client-container:
STEP: delete the pod
Aug 28 12:59:16.332: INFO: Waiting for pod downwardapi-volume-d46f59e5-6bf9-4f91-86d3-42c782a27854 to disappear
Aug 28 12:59:16.614: INFO: Pod downwardapi-volume-d46f59e5-6bf9-4f91-86d3-42c782a27854 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 12:59:16.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-11" for this suite.
• [SLOW TEST:11.515 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":173,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:59:17.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-bb18d177-9b4e-4121-9902-b41efeca74ff
STEP: Creating secret with name secret-projected-all-test-volume-0aa345ca-f55a-4cc6-bffd-9f3f0c965c7f
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 28 12:59:19.399: INFO: Waiting up to 5m0s for pod "projected-volume-79da0d77-1cac-406d-8836-cb9c07bdd515" in namespace "projected-1013" to be "Succeeded or Failed"
Aug 28 12:59:19.405: INFO: Pod "projected-volume-79da0d77-1cac-406d-8836-cb9c07bdd515": Phase="Pending", Reason="", readiness=false. Elapsed: 5.334915ms
Aug 28 12:59:22.122: INFO: Pod "projected-volume-79da0d77-1cac-406d-8836-cb9c07bdd515": Phase="Pending", Reason="", readiness=false. Elapsed: 2.72275205s
Aug 28 12:59:24.287: INFO: Pod "projected-volume-79da0d77-1cac-406d-8836-cb9c07bdd515": Phase="Pending", Reason="", readiness=false. Elapsed: 4.887821885s
Aug 28 12:59:26.526: INFO: Pod "projected-volume-79da0d77-1cac-406d-8836-cb9c07bdd515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.127154808s
STEP: Saw pod success
Aug 28 12:59:26.527: INFO: Pod "projected-volume-79da0d77-1cac-406d-8836-cb9c07bdd515" satisfied condition "Succeeded or Failed"
Aug 28 12:59:26.532: INFO: Trying to get logs from node kali-worker2 pod projected-volume-79da0d77-1cac-406d-8836-cb9c07bdd515 container projected-all-volume-test:
STEP: delete the pod
Aug 28 12:59:26.968: INFO: Waiting for pod projected-volume-79da0d77-1cac-406d-8836-cb9c07bdd515 to disappear
Aug 28 12:59:27.357: INFO: Pod projected-volume-79da0d77-1cac-406d-8836-cb9c07bdd515 no longer exists
[AfterEach] [sig-storage] Projected combined
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 12:59:27.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1013" for this suite.
• [SLOW TEST:10.518 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":10,"skipped":189,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 12:59:27.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-5fdcd443-7f67-4641-96a9-f013c65c957b in namespace container-probe-8880
Aug 28 12:59:34.887: INFO: Started pod busybox-5fdcd443-7f67-4641-96a9-f013c65c957b in namespace container-probe-8880
STEP: checking the pod's current state and verifying that restartCount is present
Aug 28 12:59:34.891: INFO: Initial restart count of pod busybox-5fdcd443-7f67-4641-96a9-f013c65c957b is 0
Aug 28 13:00:34.094: INFO: Restart count of pod container-probe-8880/busybox-5fdcd443-7f67-4641-96a9-f013c65c957b is now 1 (59.202407983s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:00:34.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8880" for this suite.
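The restart observed above is driven by an exec liveness probe that starts failing once a sentinel file disappears. A hedged sketch of such a pod (an illustrative reconstruction under that assumption, not the suite's generated manifest; the name and timings are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness   # hypothetical; the suite appends a UUID
spec:
  containers:
  - name: busybox
    image: busybox
    # create the file, keep it briefly, then remove it so the probe begins to fail
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds only while /tmp/health exists
      initialDelaySeconds: 5
      periodSeconds: 5
```

Once `cat /tmp/health` starts failing, the kubelet kills and restarts the container, which is what the log's "Restart count ... is now 1" line records.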
• [SLOW TEST:66.592 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":211,"failed":0}
SS
------------------------------
[k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:00:34.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-2fe108c9-f852-47df-8409-bbdfb2732e2b in namespace container-probe-4868
Aug 28 13:00:45.272: INFO: Started pod liveness-2fe108c9-f852-47df-8409-bbdfb2732e2b in namespace container-probe-4868
STEP: checking the pod's current state and verifying that restartCount is present
Aug 28 13:00:45.277: INFO: Initial restart count of pod liveness-2fe108c9-f852-47df-8409-bbdfb2732e2b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:04:47.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4868" for this suite.
• [SLOW TEST:253.080 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":213,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:04:47.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1340
[It] Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1340
STEP: Creating statefulset with conflicting port in namespace statefulset-1340
STEP: Waiting until pod test-pod will start running in namespace statefulset-1340
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1340
Aug 28 13:04:55.639: INFO: Observed stateful pod in namespace: statefulset-1340, name: ss-0, uid: 64e8e4cb-c55f-447d-bee6-bb4a1315cbd3, status phase: Failed. Waiting for statefulset controller to delete.
Aug 28 13:04:55.871: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1340
STEP: Removing pod with conflicting port in namespace statefulset-1340
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1340 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 28 13:05:02.898: INFO: Deleting all statefulset in ns statefulset-1340
Aug 28 13:05:02.907: INFO: Scaling statefulset ss to 0
Aug 28 13:05:22.955: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 13:05:22.961: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:05:22.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1340" for this suite.
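The "conflicting port" in the steps above refers to a `hostPort`: a bare pod already binds the port on the chosen node, so the StatefulSet's `ss-0` lands in phase Failed, and the controller must delete and recreate it once the conflicting pod is removed. A hedged sketch of the StatefulSet side of that setup (illustrative names and port, not the suite's generated object):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test          # headless service created beforehand, as in the log
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: nginx         # illustrative image
        ports:
        - containerPort: 80
          hostPort: 21017    # hypothetical port; a pre-created bare pod on the same node holds it
```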
• [SLOW TEST:35.392 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":13,"skipped":219,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:05:22.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 28 13:05:23.101: INFO: Waiting up to 5m0s for pod "pod-3434bb31-502a-46d2-bdc8-13c7bee7b8a6" in namespace "emptydir-8263" to be "Succeeded or Failed"
Aug 28 13:05:23.133: INFO: Pod "pod-3434bb31-502a-46d2-bdc8-13c7bee7b8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 31.18875ms
Aug 28 13:05:25.140: INFO: Pod "pod-3434bb31-502a-46d2-bdc8-13c7bee7b8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03833825s
Aug 28 13:05:27.147: INFO: Pod "pod-3434bb31-502a-46d2-bdc8-13c7bee7b8a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045373497s
STEP: Saw pod success
Aug 28 13:05:27.147: INFO: Pod "pod-3434bb31-502a-46d2-bdc8-13c7bee7b8a6" satisfied condition "Succeeded or Failed"
Aug 28 13:05:27.152: INFO: Trying to get logs from node kali-worker pod pod-3434bb31-502a-46d2-bdc8-13c7bee7b8a6 container test-container:
STEP: delete the pod
Aug 28 13:05:27.363: INFO: Waiting for pod pod-3434bb31-502a-46d2-bdc8-13c7bee7b8a6 to disappear
Aug 28 13:05:27.378: INFO: Pod pod-3434bb31-502a-46d2-bdc8-13c7bee7b8a6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:05:27.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8263" for this suite.
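The emptyDir mode check above can be reproduced with a pod along these lines (a hedged sketch with illustrative names, not the suite's generated manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-mode    # hypothetical; the suite uses a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # print the permission bits of the mount point, then exit so the pod reaches Succeeded
    command: ["/bin/sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # default medium (node storage); medium: Memory would use tmpfs instead
```

The test then reads the container's log (as in the "Trying to get logs" line above) to assert the directory was created with the expected mode.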
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":226,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:05:27.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-n466
STEP: Creating a pod to test atomic-volume-subpath
Aug 28 13:05:27.825: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-n466" in namespace "subpath-6376" to be "Succeeded or Failed"
Aug 28 13:05:27.841: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Pending", Reason="", readiness=false. Elapsed: 16.278224ms
Aug 28 13:05:29.850: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024576408s
Aug 28 13:05:31.856: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030690817s
Aug 28 13:05:33.884: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Running", Reason="", readiness=true. Elapsed: 6.059283146s
Aug 28 13:05:35.893: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Running", Reason="", readiness=true. Elapsed: 8.067828552s
Aug 28 13:05:37.898: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Running", Reason="", readiness=true. Elapsed: 10.072786856s
Aug 28 13:05:40.230: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Running", Reason="", readiness=true. Elapsed: 12.404399893s
Aug 28 13:05:42.287: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Running", Reason="", readiness=true. Elapsed: 14.462169527s
Aug 28 13:05:44.520: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Running", Reason="", readiness=true. Elapsed: 16.694533699s
Aug 28 13:05:46.526: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Running", Reason="", readiness=true. Elapsed: 18.701189827s
Aug 28 13:05:48.533: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Running", Reason="", readiness=true. Elapsed: 20.707741886s
Aug 28 13:05:50.543: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Running", Reason="", readiness=true. Elapsed: 22.717518889s
Aug 28 13:05:52.549: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Running", Reason="", readiness=true. Elapsed: 24.724221298s
Aug 28 13:05:54.558: INFO: Pod "pod-subpath-test-configmap-n466": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.732553065s
STEP: Saw pod success
Aug 28 13:05:54.558: INFO: Pod "pod-subpath-test-configmap-n466" satisfied condition "Succeeded or Failed"
Aug 28 13:05:54.564: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-n466 container test-container-subpath-configmap-n466:
STEP: delete the pod
Aug 28 13:05:54.733: INFO: Waiting for pod pod-subpath-test-configmap-n466 to disappear
Aug 28 13:05:54.741: INFO: Pod pod-subpath-test-configmap-n466 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-n466
Aug 28 13:05:54.742: INFO: Deleting pod "pod-subpath-test-configmap-n466" in namespace "subpath-6376"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:05:54.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6376" for this suite.
• [SLOW TEST:27.282 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":15,"skipped":233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:05:54.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-347d256b-fccb-461c-965d-5f91ba137b84 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:05:54.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5430" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":16,"skipped":256,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:05:54.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 28 13:05:55.112: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 28 13:05:55.166: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 28 13:06:00.302: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 28 13:06:00.305: INFO: Creating deployment "test-rolling-update-deployment" Aug 28 13:06:00.320: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 28 13:06:00.916: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set Aug 28 13:06:02.933: INFO: Ensuring status for deployment 
"test-rolling-update-deployment" is the expected Aug 28 13:06:02.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216760, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216760, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216761, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216760, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 13:06:04.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216760, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216760, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216761, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216760, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 
13:06:06.952: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 28 13:06:06.989: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3421 /apis/apps/v1/namespaces/deployment-3421/deployments/test-rolling-update-deployment 1ee82603-f91b-47cc-9663-ba714ab98a25 1750451 1 2020-08-28 13:06:00 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-08-28 13:06:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 
34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-28 13:06:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 
109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003479638 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-28 13:06:00 +0000 UTC,LastTransitionTime:2020-08-28 13:06:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-08-28 13:06:05 +0000 UTC,LastTransitionTime:2020-08-28 13:06:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 28 13:06:07.004: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7 deployment-3421 /apis/apps/v1/namespaces/deployment-3421/replicasets/test-rolling-update-deployment-59d5cb45c7 c4368f31-9dea-4d27-95d5-a8274dbc0e25 1750437 1 2020-08-28 13:06:00 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 1ee82603-f91b-47cc-9663-ba714ab98a25 0x4000f4ac67 0x4000f4ac68}] [] [{kube-controller-manager Update apps/v1 2020-08-28 13:06:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97
110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 101 101 56 50 54 48 51 45 102 57 49 98 45 52 55 99 99 45 57 54 54 51 45 98 97 55 49 52 97 98 57 56 97 50 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 
101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 
59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4000f4acf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 28 13:06:07.005: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 28 13:06:07.007: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3421 /apis/apps/v1/namespaces/deployment-3421/replicasets/test-rolling-update-controller 8aaa6cfe-b496-46d9-9877-64fc36f82761 1750449 2 2020-08-28 13:05:55 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 1ee82603-f91b-47cc-9663-ba714ab98a25 0x4000f4ab3f 0x4000f4ab50}] [] [{e2e.test Update apps/v1 2020-08-28 13:05:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 
114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-28 13:06:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 
58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 101 101 56 50 54 48 51 45 102 57 49 98 45 52 55 99 99 45 57 54 54 51 45 98 97 55 49 52 97 98 57 56 97 50 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4000f4abe8 ClusterFirst map[] false false false 
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 28 13:06:07.031: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-twpd2" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-twpd2 test-rolling-update-deployment-59d5cb45c7- deployment-3421 /api/v1/namespaces/deployment-3421/pods/test-rolling-update-deployment-59d5cb45c7-twpd2 317441f9-7bf4-47f4-9bb3-a65aa1b12560 1750436 0 2020-08-28 13:06:00 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 c4368f31-9dea-4d27-95d5-a8274dbc0e25 0x40007fdfa7 0x40007fdfa8}] [] [{kube-controller-manager Update v1 2020-08-28 13:06:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 52 51 54 56 102 51 49 45 57 100 101 97 45 52 100 50 55 45 57 53 100 53 45 97 56 50 55 52 100 98 99 48 101 50 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 
123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 13:06:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5hpg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5hpg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5hpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},Image
PullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 13:06:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 13:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 13:06:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 13:06:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.155,StartTime:2020-08-28 13:06:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 13:06:04 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://92cb97e3432636c2bbe7fdc2c252836aa92c27c489e6ab938e8a86f4b0f77db1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.155,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:06:07.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3421" for this suite. • [SLOW TEST:12.086 seconds] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":17,"skipped":265,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:06:07.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 28 13:06:13.172: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 28 13:06:15.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216773, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216773, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216774, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216772, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 13:06:17.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216773, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216773, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216774, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216772, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 28 13:06:20.539: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 28 13:06:20.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-253-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:06:21.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3303" for this suite. STEP: Destroying namespace "webhook-3303-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.867 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":18,"skipped":279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:06:21.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
configmap-test-volume-map-4835e4a3-2d82-4b51-859a-f48fc9e34c3b STEP: Creating a pod to test consume configMaps Aug 28 13:06:22.100: INFO: Waiting up to 5m0s for pod "pod-configmaps-09ce5a92-ab04-40b9-93f4-00824c35b65e" in namespace "configmap-4229" to be "Succeeded or Failed" Aug 28 13:06:22.177: INFO: Pod "pod-configmaps-09ce5a92-ab04-40b9-93f4-00824c35b65e": Phase="Pending", Reason="", readiness=false. Elapsed: 76.850312ms Aug 28 13:06:24.306: INFO: Pod "pod-configmaps-09ce5a92-ab04-40b9-93f4-00824c35b65e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205693345s Aug 28 13:06:26.571: INFO: Pod "pod-configmaps-09ce5a92-ab04-40b9-93f4-00824c35b65e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470961416s Aug 28 13:06:28.937: INFO: Pod "pod-configmaps-09ce5a92-ab04-40b9-93f4-00824c35b65e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.837222416s STEP: Saw pod success Aug 28 13:06:28.938: INFO: Pod "pod-configmaps-09ce5a92-ab04-40b9-93f4-00824c35b65e" satisfied condition "Succeeded or Failed" Aug 28 13:06:29.187: INFO: Trying to get logs from node kali-worker pod pod-configmaps-09ce5a92-ab04-40b9-93f4-00824c35b65e container configmap-volume-test: STEP: delete the pod Aug 28 13:06:29.378: INFO: Waiting for pod pod-configmaps-09ce5a92-ab04-40b9-93f4-00824c35b65e to disappear Aug 28 13:06:29.447: INFO: Pod pod-configmaps-09ce5a92-ab04-40b9-93f4-00824c35b65e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:06:29.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4229" for this suite. 
• [SLOW TEST:7.542 seconds] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":325,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:06:29.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 28 13:06:29.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4583' Aug 28 13:06:44.643: INFO: stderr: "" Aug 28 13:06:44.643: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Aug 28 13:06:44.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4583' Aug 28 13:06:50.533: INFO: stderr: "" Aug 28 13:06:50.533: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:06:50.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4583" for this suite. 
• [SLOW TEST:21.857 seconds] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":20,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:06:51.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 28 13:06:52.149: INFO: 
Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Aug 28 13:06:52.296: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:06:52.365: INFO: Number of nodes with available pods: 0 Aug 28 13:06:52.365: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:06:53.703: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:06:54.468: INFO: Number of nodes with available pods: 0 Aug 28 13:06:54.468: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:06:55.384: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:06:55.994: INFO: Number of nodes with available pods: 0 Aug 28 13:06:55.994: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:06:56.528: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:06:56.550: INFO: Number of nodes with available pods: 0 Aug 28 13:06:56.550: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:06:57.550: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:06:57.782: INFO: Number of nodes with available pods: 0 Aug 28 13:06:57.782: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:06:58.560: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 
13:06:58.733: INFO: Number of nodes with available pods: 0 Aug 28 13:06:58.734: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:06:59.492: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:06:59.498: INFO: Number of nodes with available pods: 0 Aug 28 13:06:59.498: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:07:00.587: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:00.970: INFO: Number of nodes with available pods: 2 Aug 28 13:07:00.970: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 28 13:07:01.911: INFO: Wrong image for pod: daemon-set-n6hft. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:01.911: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:01.994: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:03.006: INFO: Wrong image for pod: daemon-set-n6hft. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:03.006: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 28 13:07:03.014: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:04.003: INFO: Wrong image for pod: daemon-set-n6hft. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:04.003: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:04.019: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:05.003: INFO: Wrong image for pod: daemon-set-n6hft. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:05.003: INFO: Pod daemon-set-n6hft is not available Aug 28 13:07:05.003: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:05.013: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:06.003: INFO: Wrong image for pod: daemon-set-n6hft. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:06.003: INFO: Pod daemon-set-n6hft is not available Aug 28 13:07:06.003: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 28 13:07:06.009: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:07.134: INFO: Wrong image for pod: daemon-set-n6hft. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:07.134: INFO: Pod daemon-set-n6hft is not available Aug 28 13:07:07.134: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:07.185: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:08.235: INFO: Pod daemon-set-cxhtv is not available Aug 28 13:07:08.235: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:08.296: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:09.005: INFO: Pod daemon-set-cxhtv is not available Aug 28 13:07:09.005: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:09.015: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:10.005: INFO: Pod daemon-set-cxhtv is not available Aug 28 13:07:10.005: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 28 13:07:10.015: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:11.005: INFO: Pod daemon-set-cxhtv is not available Aug 28 13:07:11.005: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:11.014: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:12.002: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:12.012: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:13.413: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:13.500: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:14.004: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:14.013: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:15.190: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 28 13:07:15.190: INFO: Pod daemon-set-r9n2c is not available Aug 28 13:07:15.199: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:16.004: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:16.004: INFO: Pod daemon-set-r9n2c is not available Aug 28 13:07:16.014: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:17.054: INFO: Wrong image for pod: daemon-set-r9n2c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Aug 28 13:07:17.054: INFO: Pod daemon-set-r9n2c is not available Aug 28 13:07:17.259: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:18.118: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:19.082: INFO: Pod daemon-set-8fkm9 is not available Aug 28 13:07:19.309: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 28 13:07:19.481: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:19.487: INFO: Number of nodes with available pods: 1 Aug 28 13:07:19.487: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:07:20.659: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:20.666: INFO: Number of nodes with available pods: 1 Aug 28 13:07:20.666: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:07:21.501: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:21.507: INFO: Number of nodes with available pods: 1 Aug 28 13:07:21.507: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:07:22.653: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:22.883: INFO: Number of nodes with available pods: 1 Aug 28 13:07:22.883: INFO: Node kali-worker is running more than one daemon pod Aug 28 13:07:23.749: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 28 13:07:24.247: INFO: Number of nodes with available pods: 2 Aug 28 13:07:24.248: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace 
daemonsets-4671, will wait for the garbage collector to delete the pods Aug 28 13:07:24.421: INFO: Deleting DaemonSet.extensions daemon-set took: 10.77426ms Aug 28 13:07:24.924: INFO: Terminating DaemonSet.extensions daemon-set pods took: 502.983155ms Aug 28 13:07:37.844: INFO: Number of nodes with available pods: 0 Aug 28 13:07:37.844: INFO: Number of running nodes: 0, number of available pods: 0 Aug 28 13:07:37.854: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4671/daemonsets","resourceVersion":"1751069"},"items":null} Aug 28 13:07:37.860: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4671/pods","resourceVersion":"1751069"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:07:37.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4671" for this suite. 
• [SLOW TEST:46.561 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":21,"skipped":356,"failed":0} [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:07:37.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 28 13:07:37.969: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:07:42.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6145" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":356,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:07:42.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:07:42.733: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 28 13:07:44.721: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:07:45.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1090" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":23,"skipped":367,"failed":0}
S
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:07:45.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-4016445b-88b2-4cb1-a48f-013bbdc7e12a
STEP: Creating a pod to test consume secrets
Aug 28 13:07:45.784: INFO: Waiting up to 5m0s for pod "pod-secrets-5508ef09-bfdf-4107-9346-dbac079d6d71" in namespace "secrets-4099" to be "Succeeded or Failed"
Aug 28 13:07:46.419: INFO: Pod "pod-secrets-5508ef09-bfdf-4107-9346-dbac079d6d71": Phase="Pending", Reason="", readiness=false. Elapsed: 634.29894ms
Aug 28 13:07:48.540: INFO: Pod "pod-secrets-5508ef09-bfdf-4107-9346-dbac079d6d71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.754906236s
Aug 28 13:07:50.609: INFO: Pod "pod-secrets-5508ef09-bfdf-4107-9346-dbac079d6d71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.824405222s
Aug 28 13:07:52.648: INFO: Pod "pod-secrets-5508ef09-bfdf-4107-9346-dbac079d6d71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.863197557s
STEP: Saw pod success
Aug 28 13:07:52.648: INFO: Pod "pod-secrets-5508ef09-bfdf-4107-9346-dbac079d6d71" satisfied condition "Succeeded or Failed"
Aug 28 13:07:52.879: INFO: Trying to get logs from node kali-worker pod pod-secrets-5508ef09-bfdf-4107-9346-dbac079d6d71 container secret-volume-test:
STEP: delete the pod
Aug 28 13:07:53.412: INFO: Waiting for pod pod-secrets-5508ef09-bfdf-4107-9346-dbac079d6d71 to disappear
Aug 28 13:07:53.528: INFO: Pod pod-secrets-5508ef09-bfdf-4107-9346-dbac079d6d71 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:07:53.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4099" for this suite.
• [SLOW TEST:8.321 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":368,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:07:53.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 28 13:07:54.185: INFO: Waiting up to 5m0s for pod "pod-bc07e9ab-2e34-4e99-b3d5-3064e6fdec02" in namespace "emptydir-895" to be "Succeeded or Failed"
Aug 28 13:07:54.237: INFO: Pod "pod-bc07e9ab-2e34-4e99-b3d5-3064e6fdec02": Phase="Pending", Reason="", readiness=false. Elapsed: 51.070425ms
Aug 28 13:07:56.244: INFO: Pod "pod-bc07e9ab-2e34-4e99-b3d5-3064e6fdec02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058261982s
Aug 28 13:07:58.254: INFO: Pod "pod-bc07e9ab-2e34-4e99-b3d5-3064e6fdec02": Phase="Running", Reason="", readiness=true. Elapsed: 4.068124843s
Aug 28 13:08:00.291: INFO: Pod "pod-bc07e9ab-2e34-4e99-b3d5-3064e6fdec02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105294673s
STEP: Saw pod success
Aug 28 13:08:00.291: INFO: Pod "pod-bc07e9ab-2e34-4e99-b3d5-3064e6fdec02" satisfied condition "Succeeded or Failed"
Aug 28 13:08:00.297: INFO: Trying to get logs from node kali-worker pod pod-bc07e9ab-2e34-4e99-b3d5-3064e6fdec02 container test-container:
STEP: delete the pod
Aug 28 13:08:00.352: INFO: Waiting for pod pod-bc07e9ab-2e34-4e99-b3d5-3064e6fdec02 to disappear
Aug 28 13:08:00.363: INFO: Pod pod-bc07e9ab-2e34-4e99-b3d5-3064e6fdec02 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:08:00.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-895" for this suite.
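The "(non-root,0777,default)" in the test name above is an octal file mode. In JSON manifests the corresponding `defaultMode` field is a plain decimal integer, so 0777 appears as 511. A small sketch of that correspondence (the variable names here are illustrative):

```python
import stat

# In JSON manifests, a volume's "defaultMode" is a decimal integer;
# 511 decimal is 0o777 octal, the mode this test exercises.
default_mode = 511

# Render it the way `ls -l` would for a regular file.
rendered = stat.filemode(stat.S_IFREG | default_mode)
print(oct(default_mode), rendered)  # 0o777 -rwxrwxrwx
```

This is why e2e test names use the octal spelling while the API objects they create carry the decimal value.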
• [SLOW TEST:6.809 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":373,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:08:00.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Aug 28 13:08:13.160: INFO: 5 pods remaining
Aug 28 13:08:13.161: INFO: 5 pods has nil DeletionTimestamp
Aug 28 13:08:13.161: INFO:
STEP: Gathering metrics
W0828 13:08:17.814164      11 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 28 13:08:17.814: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:08:17.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5487" for this suite.
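The garbage collector test above gives half the pods a second owner (`simpletest-rc-to-stay`) before deleting the first owner, and then checks that dual-owned pods survive. The rule it exercises — a dependent is collectible only once every owner in its `ownerReferences` is gone — can be sketched with plain dicts. This is a simplification for illustration; the real garbage collector tracks a full dependency graph and the UIDs below are hypothetical:

```python
def deletable(pod, deleted_owner_uids):
    """Return True if every owner in the pod's ownerReferences has been
    deleted. Simplified sketch of the rule the test exercises; the real
    garbage collector maintains an object graph and blockOwnerDeletion
    semantics beyond this."""
    owners = pod["metadata"].get("ownerReferences", [])
    return all(ref["uid"] in deleted_owner_uids for ref in owners)

rc_deleted = "uid-to-be-deleted"  # hypothetical UID of simpletest-rc-to-be-deleted
rc_stays = "uid-to-stay"          # hypothetical UID of simpletest-rc-to-stay

single_owner = {"metadata": {"ownerReferences": [{"uid": rc_deleted}]}}
dual_owner = {"metadata": {"ownerReferences": [
    {"uid": rc_deleted}, {"uid": rc_stays}]}}

print(deletable(single_owner, {rc_deleted}))  # True
print(deletable(dual_owner, {rc_deleted}))    # False: a valid owner remains
```

That second result is exactly why the log still shows "5 pods remaining" after the first replication controller is deleted.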
• [SLOW TEST:17.678 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":26,"skipped":392,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:08:18.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:08:18.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Aug 28 13:08:19.456: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T13:08:19Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-28T13:08:19Z]] name:name1 resourceVersion:1751502 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6b62d6f1-86de-46a3-a3bf-d01785727000] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 28 13:08:29.465: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T13:08:29Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-28T13:08:29Z]] name:name2 resourceVersion:1751618 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:e2de23e3-61c0-4e1c-a5ce-0216c42f51f7] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 28 13:08:39.479: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T13:08:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-28T13:08:39Z]] name:name1 resourceVersion:1751667 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6b62d6f1-86de-46a3-a3bf-d01785727000] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 28 13:08:49.491: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T13:08:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-28T13:08:49Z]] name:name2 resourceVersion:1751722 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:e2de23e3-61c0-4e1c-a5ce-0216c42f51f7] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 28 13:08:59.502: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T13:08:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-28T13:08:39Z]] name:name1 resourceVersion:1751756 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6b62d6f1-86de-46a3-a3bf-d01785727000] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 28 13:09:09.514: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-28T13:08:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-28T13:08:49Z]] name:name2 resourceVersion:1751808 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:e2de23e3-61c0-4e1c-a5ce-0216c42f51f7] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:09:20.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5506" for this suite.
• [SLOW TEST:61.993 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
watch on custom resource definition objects [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":27,"skipped":416,"failed":0}
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:09:20.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-864c3276-a17b-4d63-82b4-947f6228322f
STEP: Creating a pod to test consume configMaps
Aug 28 13:09:20.138: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-464420e1-01eb-459b-9898-97ea13bf4e64" in namespace "projected-5489" to be "Succeeded or Failed"
Aug 28 13:09:20.167: INFO: Pod "pod-projected-configmaps-464420e1-01eb-459b-9898-97ea13bf4e64": Phase="Pending", Reason="", readiness=false. Elapsed: 29.427573ms
Aug 28 13:09:22.409: INFO: Pod "pod-projected-configmaps-464420e1-01eb-459b-9898-97ea13bf4e64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270813523s
Aug 28 13:09:24.613: INFO: Pod "pod-projected-configmaps-464420e1-01eb-459b-9898-97ea13bf4e64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475200713s
Aug 28 13:09:26.619: INFO: Pod "pod-projected-configmaps-464420e1-01eb-459b-9898-97ea13bf4e64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.481087882s
STEP: Saw pod success
Aug 28 13:09:26.619: INFO: Pod "pod-projected-configmaps-464420e1-01eb-459b-9898-97ea13bf4e64" satisfied condition "Succeeded or Failed"
Aug 28 13:09:26.707: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-464420e1-01eb-459b-9898-97ea13bf4e64 container projected-configmap-volume-test:
STEP: delete the pod
Aug 28 13:09:27.084: INFO: Waiting for pod pod-projected-configmaps-464420e1-01eb-459b-9898-97ea13bf4e64 to disappear
Aug 28 13:09:27.088: INFO: Pod pod-projected-configmaps-464420e1-01eb-459b-9898-97ea13bf4e64 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:09:27.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5489" for this suite.
• [SLOW TEST:7.047 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":417,"failed":0}
SSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:09:27.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:09:27.209: INFO: Waiting up to 5m0s for pod "busybox-user-65534-dfe45630-a56c-44bf-9b3a-dce448881239" in namespace "security-context-test-7496" to be "Succeeded or Failed"
Aug 28 13:09:27.218: INFO: Pod "busybox-user-65534-dfe45630-a56c-44bf-9b3a-dce448881239": Phase="Pending", Reason="", readiness=false. Elapsed: 9.143278ms
Aug 28 13:09:29.223: INFO: Pod "busybox-user-65534-dfe45630-a56c-44bf-9b3a-dce448881239": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014120029s
Aug 28 13:09:31.468: INFO: Pod "busybox-user-65534-dfe45630-a56c-44bf-9b3a-dce448881239": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258837258s
Aug 28 13:09:33.668: INFO: Pod "busybox-user-65534-dfe45630-a56c-44bf-9b3a-dce448881239": Phase="Pending", Reason="", readiness=false. Elapsed: 6.458911138s
Aug 28 13:09:35.687: INFO: Pod "busybox-user-65534-dfe45630-a56c-44bf-9b3a-dce448881239": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.477840671s
Aug 28 13:09:35.687: INFO: Pod "busybox-user-65534-dfe45630-a56c-44bf-9b3a-dce448881239" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:09:35.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7496" for this suite.
• [SLOW TEST:8.598 seconds]
[k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
When creating a container with runAsUser
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:09:35.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Aug 28 13:09:36.273: INFO: Waiting up to 5m0s for pod "var-expansion-9a18b618-41bb-40bc-86a7-287fde77fa92" in namespace "var-expansion-6822" to be "Succeeded or Failed"
Aug 28 13:09:36.277: INFO: Pod "var-expansion-9a18b618-41bb-40bc-86a7-287fde77fa92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.40209ms
Aug 28 13:09:38.415: INFO: Pod "var-expansion-9a18b618-41bb-40bc-86a7-287fde77fa92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141824659s
Aug 28 13:09:40.421: INFO: Pod "var-expansion-9a18b618-41bb-40bc-86a7-287fde77fa92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148129614s
STEP: Saw pod success
Aug 28 13:09:40.421: INFO: Pod "var-expansion-9a18b618-41bb-40bc-86a7-287fde77fa92" satisfied condition "Succeeded or Failed"
Aug 28 13:09:40.425: INFO: Trying to get logs from node kali-worker2 pod var-expansion-9a18b618-41bb-40bc-86a7-287fde77fa92 container dapi-container:
STEP: delete the pod
Aug 28 13:09:41.566: INFO: Waiting for pod var-expansion-9a18b618-41bb-40bc-86a7-287fde77fa92 to disappear
Aug 28 13:09:41.929: INFO: Pod var-expansion-9a18b618-41bb-40bc-86a7-287fde77fa92 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:09:41.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6822" for this suite.
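The Variable Expansion test above exercises Kubernetes `$(VAR_NAME)` substitution in a container's `args`: defined variables are substituted, undefined references are left verbatim, and `$$` escapes a literal `$`. A simplified sketch of that behavior (not the exact upstream algorithm; `MY_VAR` is a hypothetical variable):

```python
import re

def expand(s, env):
    """Simplified sketch of Kubernetes $(VAR_NAME) expansion in container
    command/args: defined refs are substituted, undefined refs are left
    as-is, and '$$' yields a literal '$'."""
    def repl(m):
        if m.group(0) == "$$":
            return "$"
        name = m.group(1)
        # Undefined variables are left verbatim rather than erased.
        return env.get(name, m.group(0))
    return re.sub(r"\$\$|\$\(([A-Za-z0-9_]+)\)", repl, s)

env = {"MY_VAR": "hello"}  # hypothetical container env
print(expand("echo $(MY_VAR)", env))   # echo hello
print(expand("echo $(MISSING)", env))  # echo $(MISSING)
print(expand("price is $$5", env))     # price is $5
```

The test's dapi-container succeeds when the expanded args it echoes match the expected substitution.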
• [SLOW TEST:6.772 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":451,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:09:42.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 28 13:09:43.234: INFO: Creating deployment "test-recreate-deployment" Aug 28 13:09:43.498: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 28 13:09:43.735: INFO: deployment 
"test-recreate-deployment" doesn't have the required revision set Aug 28 13:09:46.313: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 28 13:09:46.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216984, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 13:09:48.668: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216984, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 13:09:50.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216984, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 13:09:52.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216984, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734216983, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 28 13:09:54.668: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 28 13:09:54.682: INFO: Updating deployment test-recreate-deployment Aug 28 13:09:54.682: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 28 13:09:56.919: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7434 /apis/apps/v1/namespaces/deployment-7434/deployments/test-recreate-deployment 2777d488-a78f-4e00-95d3-50d36e1e2996 1752146 2 2020-08-28 13:09:43 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-28 13:09:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34
58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-28 13:09:56 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 
125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4000f91778 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-28 13:09:56 +0000 UTC,LastTransitionTime:2020-08-28 13:09:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-08-28 13:09:56 +0000 UTC,LastTransitionTime:2020-08-28 13:09:43 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Aug 28 13:09:56.928: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-7434 /apis/apps/v1/namespaces/deployment-7434/replicasets/test-recreate-deployment-d5667d9c7 2890539f-d0f8-4806-8b5e-9e5acadcd6c3 1752143 1 2020-08-28 13:09:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 2777d488-a78f-4e00-95d3-50d36e1e2996 0x4000f91f30 0x4000f91f31}] [] [{kube-controller-manager Update apps/v1 2020-08-28 13:09:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 
120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 55 55 100 52 56 56 45 97 55 56 102 45 52 101 48 48 45 57 53 100 51 45 53 48 100 51 54 101 49 101 50 57 57 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 
125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40008880a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 28 13:09:56.928: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 28 13:09:56.929: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c deployment-7434 /apis/apps/v1/namespaces/deployment-7434/replicasets/test-recreate-deployment-74d98b5f7c 0faa5e7c-3a9b-43c5-8fb0-a052c28c8caa 1752128 2 2020-08-28 13:09:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 2777d488-a78f-4e00-95d3-50d36e1e2996 0x4000f91de7 0x4000f91de8}] [] [{kube-controller-manager Update apps/v1 2020-08-28 13:09:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 
34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 55 55 100 52 56 56 45 97 55 56 102 45 52 101 48 48 45 57 53 100 51 45 53 48 100 51 54 101 49 101 50 57 57 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 
114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4000f91ec8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 28 13:09:56.936: INFO: Pod "test-recreate-deployment-d5667d9c7-bwbq7" is not 
available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-bwbq7 test-recreate-deployment-d5667d9c7- deployment-7434 /api/v1/namespaces/deployment-7434/pods/test-recreate-deployment-d5667d9c7-bwbq7 65bc5b40-e7da-4683-8f6f-6f58db3f3f41 1752147 0 2020-08-28 13:09:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 2890539f-d0f8-4806-8b5e-9e5acadcd6c3 0x4000888b10 0x4000888b11}] [] [{kube-controller-manager Update v1 2020-08-28 13:09:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 56 57 48 53 51 57 102 45 100 48 102 56 45 52 56 48 54 45 56 98 53 101 45 57 101 53 97 99 97 100 99 100 54 99 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 
101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 13:09:56 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 
82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wx5gx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wx5gx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wx5gx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,
},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 13:09:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 13:09:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 13:09:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 13:09:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-28 13:09:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:09:56.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7434" for this suite. 
• [SLOW TEST:14.476 seconds] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":31,"skipped":456,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:09:56.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 28 13:09:58.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351" in namespace "projected-2072" to be "Succeeded or Failed" Aug 28 
13:09:58.391: INFO: Pod "downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351": Phase="Pending", Reason="", readiness=false. Elapsed: 221.981197ms Aug 28 13:10:00.458: INFO: Pod "downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2897018s Aug 28 13:10:03.245: INFO: Pod "downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351": Phase="Pending", Reason="", readiness=false. Elapsed: 5.076044994s Aug 28 13:10:05.254: INFO: Pod "downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351": Phase="Pending", Reason="", readiness=false. Elapsed: 7.084854673s Aug 28 13:10:07.319: INFO: Pod "downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351": Phase="Running", Reason="", readiness=true. Elapsed: 9.15020641s Aug 28 13:10:09.326: INFO: Pod "downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.157027064s STEP: Saw pod success Aug 28 13:10:09.326: INFO: Pod "downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351" satisfied condition "Succeeded or Failed" Aug 28 13:10:09.330: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351 container client-container: STEP: delete the pod Aug 28 13:10:09.371: INFO: Waiting for pod downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351 to disappear Aug 28 13:10:09.402: INFO: Pod downwardapi-volume-0427811f-7340-4991-b90d-19c38fe58351 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:10:09.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2072" for this suite. 
• [SLOW TEST:12.464 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":471,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:10:09.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Aug 28 13:10:18.367: INFO: Successfully updated pod "labelsupdatea76b772f-faea-44ae-acf3-18d6e400720a" [AfterEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:10:20.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3761" for this suite. • [SLOW TEST:11.096 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":502,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:10:20.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:10:39.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1855" for this suite. • [SLOW TEST:18.865 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":34,"skipped":521,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:10:39.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 28 13:10:39.610: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:10:51.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4297" for this suite. 
• [SLOW TEST:12.449 seconds] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:10:51.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Aug 28 13:10:51.964: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 28 13:10:51.998: INFO: Waiting for terminating namespaces to be deleted... 
Aug 28 13:10:52.008: INFO: Logging pods the kubelet thinks is on node kali-worker before test
Aug 28 13:10:52.050: INFO: kube-proxy-hhbw6 from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 28 13:10:52.051: INFO: Container kube-proxy ready: true, restart count 0
Aug 28 13:10:52.051: INFO: daemon-set-rsfwc from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 28 13:10:52.051: INFO: Container app ready: true, restart count 0
Aug 28 13:10:52.051: INFO: rally-368d8daa-r6zz777o from c-rally-368d8daa-xtt80xko started at 2020-08-28 13:10:40 +0000 UTC (1 container statuses recorded)
Aug 28 13:10:52.051: INFO: Container rally-368d8daa-r6zz777o ready: true, restart count 0
Aug 28 13:10:52.051: INFO: kindnet-f7bnz from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 28 13:10:52.051: INFO: Container kindnet-cni ready: true, restart count 0
Aug 28 13:10:52.051: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test
Aug 28 13:10:52.067: INFO: kindnet-4v6sn from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 28 13:10:52.068: INFO: Container kindnet-cni ready: true, restart count 0
Aug 28 13:10:52.068: INFO: kube-proxy-m77qg from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 28 13:10:52.068: INFO: Container kube-proxy ready: true, restart count 0
Aug 28 13:10:52.068: INFO: daemon-set-69cql from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 28 13:10:52.068: INFO: Container app ready: true, restart count 0
Aug 28 13:10:52.068: INFO: pod-exec-websocket-f08df288-b62e-422b-8f74-902860a1524a from pods-4297 started at 2020-08-28 13:10:39 +0000 UTC (1 container statuses recorded)
Aug 28 13:10:52.068: INFO: Container main ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4da5761a-6268-4166-b476-1ca016482067 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-4da5761a-6268-4166-b476-1ca016482067 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4da5761a-6268-4166-b476-1ca016482067
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:16:02.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3801" for this suite.
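The hostPort conflict exercised by the test above can be reproduced by hand with two pod manifests. This is a minimal sketch, not taken from the suite: the pod names match the log's pod4/pod5, but the image and the nodeSelector are illustrative assumptions (the test itself uses a randomly generated node label).

```yaml
# Hypothetical reconstruction of the conflicting pods. pod4 claims
# hostPort 54322 on 0.0.0.0 (the default when hostIP is empty), i.e. on
# every address of the node; pod5 then stays Pending on that node even
# though it asks only for 127.0.0.1, because the scheduler's port check
# treats 0.0.0.0 as conflicting with any hostIP on the same port/protocol.
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/hostname: kali-worker   # pin both pods to one node
  containers:
  - name: sleeper                          # name/image are assumptions
    image: registry.k8s.io/pause:3.9
    ports:
    - containerPort: 80
      hostPort: 54322
      hostIP: 0.0.0.0
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/hostname: kali-worker
  containers:
  - name: sleeper
    image: registry.k8s.io/pause:3.9
    ports:
    - containerPort: 80
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
```

Note the use of nodeSelector rather than nodeName: binding with nodeName would bypass the scheduler entirely, and the conflict under test is a scheduling predicate, not a kubelet-level failure.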
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:310.735 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":36,"skipped":569,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:16:02.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:16:02.672: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:16:09.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4350" for this suite.
• [SLOW TEST:7.111 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
listing custom resource definition objects works [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":37,"skipped":583,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:16:09.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:16:21.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7164" for this suite.
• [SLOW TEST:11.469 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":38,"skipped":584,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:16:21.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0828 13:16:22.034074 11 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 28 13:16:22.034: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:16:22.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7744" for this suite.
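The orphaning behaviour verified in this test is controlled by the delete call's propagationPolicy. A hedged sketch of the request body follows; the deployment name in the comment is a placeholder, not one used by this run, and the kubectl flag is the modern equivalent rather than what the 1.18-era suite invoked.

```yaml
# meta/v1 DeleteOptions body sent with
#   DELETE /apis/apps/v1/namespaces/<ns>/deployments/<name>
# propagationPolicy: Orphan removes owner references from dependents
# instead of cascading the delete, so the ReplicaSet the Deployment
# created survives (exactly what the GC test checks for).
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

Recent kubectl releases expose the same behaviour as `kubectl delete deployment <name> --cascade=orphan`; the other policy values are `Background` (default for most resources) and `Foreground`.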
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":39,"skipped":593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:16:22.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 28 13:16:23.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-611' Aug 28 13:16:24.873: INFO: stderr: "" Aug 28 13:16:24.874: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: 
verifying the pod e2e-test-httpd-pod was created Aug 28 13:16:34.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-611 -o json' Aug 28 13:16:37.145: INFO: stderr: "" Aug 28 13:16:37.145: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-28T13:16:24Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-28T13:16:24Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.204\\\"}\": {\n 
\".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-28T13:16:31Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-611\",\n \"resourceVersion\": \"1753758\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-611/pods/e2e-test-httpd-pod\",\n \"uid\": \"94224ea3-4147-4e97-8823-6bf105faa489\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5gbvw\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5gbvw\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5gbvw\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-28T13:16:25Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-28T13:16:30Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-28T13:16:30Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-28T13:16:24Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://12b84365a46b5284f8fcbcb1ae8f90b4dd139fbc3175edbef9d6b00d9af1fab2\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-28T13:16:30Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.15\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.204\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.204\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-28T13:16:25Z\"\n }\n}\n" STEP: replace the image in the pod Aug 28 13:16:37.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-611' Aug 28 13:16:39.267: INFO: stderr: "" Aug 28 13:16:39.267: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Aug 28 13:16:39.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-611' Aug 28 13:16:47.690: INFO: stderr: "" Aug 28 13:16:47.690: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:16:47.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-611" for this suite.
• [SLOW TEST:25.171 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl replace
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
should update a single-container pod's image [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":40,"skipped":627,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:16:47.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:16:48.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1163" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":41,"skipped":671,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:16:48.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug 28 13:16:48.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:18:38.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5159" for this suite.
• [SLOW TEST:110.399 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":42,"skipped":677,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:18:38.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 28 13:18:38.778: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 13:18:58.623: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:20:10.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-52" for this suite.
• [SLOW TEST:92.274 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":43,"skipped":685,"failed":0}
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:20:10.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 28 13:20:12.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1483'
Aug 28 13:20:19.797: INFO: stderr: ""
Aug 28 13:20:19.797: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 28 13:20:19.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1483'
Aug 28 13:20:21.577: INFO: stderr: ""
Aug 28 13:20:21.577: INFO: stdout: "update-demo-nautilus-2fl46 update-demo-nautilus-ktmpk "
Aug 28 13:20:21.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2fl46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:20:22.994: INFO: stderr: ""
Aug 28 13:20:22.994: INFO: stdout: ""
Aug 28 13:20:22.995: INFO: update-demo-nautilus-2fl46 is created but not running
Aug 28 13:20:27.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1483'
Aug 28 13:20:29.300: INFO: stderr: ""
Aug 28 13:20:29.300: INFO: stdout: "update-demo-nautilus-2fl46 update-demo-nautilus-ktmpk "
Aug 28 13:20:29.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2fl46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:20:30.568: INFO: stderr: ""
Aug 28 13:20:30.568: INFO: stdout: "true"
Aug 28 13:20:30.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2fl46 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:20:31.794: INFO: stderr: ""
Aug 28 13:20:31.794: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 13:20:31.795: INFO: validating pod update-demo-nautilus-2fl46
Aug 28 13:20:31.801: INFO: got data: { "image": "nautilus.jpg" }
Aug 28 13:20:31.802: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 28 13:20:31.802: INFO: update-demo-nautilus-2fl46 is verified up and running
Aug 28 13:20:31.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktmpk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:20:33.340: INFO: stderr: ""
Aug 28 13:20:33.340: INFO: stdout: "true"
Aug 28 13:20:33.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktmpk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:20:34.630: INFO: stderr: ""
Aug 28 13:20:34.630: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 13:20:34.630: INFO: validating pod update-demo-nautilus-ktmpk
Aug 28 13:20:34.635: INFO: got data: { "image": "nautilus.jpg" }
Aug 28 13:20:34.635: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 28 13:20:34.635: INFO: update-demo-nautilus-ktmpk is verified up and running
STEP: scaling down the replication controller
Aug 28 13:20:34.649: INFO: scanned /root for discovery docs:
Aug 28 13:20:34.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1483'
Aug 28 13:20:37.632: INFO: stderr: ""
Aug 28 13:20:37.632: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 28 13:20:37.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1483'
Aug 28 13:20:39.318: INFO: stderr: ""
Aug 28 13:20:39.318: INFO: stdout: "update-demo-nautilus-2fl46 update-demo-nautilus-ktmpk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 28 13:20:44.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1483'
Aug 28 13:20:45.640: INFO: stderr: ""
Aug 28 13:20:45.640: INFO: stdout: "update-demo-nautilus-2fl46 update-demo-nautilus-ktmpk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 28 13:20:50.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1483'
Aug 28 13:20:51.879: INFO: stderr: ""
Aug 28 13:20:51.880: INFO: stdout: "update-demo-nautilus-ktmpk "
Aug 28 13:20:51.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktmpk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:20:53.113: INFO: stderr: ""
Aug 28 13:20:53.113: INFO: stdout: "true"
Aug 28 13:20:53.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktmpk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:20:54.574: INFO: stderr: ""
Aug 28 13:20:54.574: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 13:20:54.574: INFO: validating pod update-demo-nautilus-ktmpk
Aug 28 13:20:54.580: INFO: got data: { "image": "nautilus.jpg" }
Aug 28 13:20:54.580: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 28 13:20:54.580: INFO: update-demo-nautilus-ktmpk is verified up and running
STEP: scaling up the replication controller
Aug 28 13:20:54.588: INFO: scanned /root for discovery docs:
Aug 28 13:20:54.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1483'
Aug 28 13:20:55.896: INFO: stderr: ""
Aug 28 13:20:55.896: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 28 13:20:55.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1483'
Aug 28 13:20:57.442: INFO: stderr: ""
Aug 28 13:20:57.442: INFO: stdout: "update-demo-nautilus-ksx2h update-demo-nautilus-ktmpk "
Aug 28 13:20:57.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksx2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:20:58.962: INFO: stderr: ""
Aug 28 13:20:58.963: INFO: stdout: ""
Aug 28 13:20:58.963: INFO: update-demo-nautilus-ksx2h is created but not running
Aug 28 13:21:03.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1483'
Aug 28 13:21:05.277: INFO: stderr: ""
Aug 28 13:21:05.277: INFO: stdout: "update-demo-nautilus-ksx2h update-demo-nautilus-ktmpk "
Aug 28 13:21:05.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksx2h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:21:06.579: INFO: stderr: ""
Aug 28 13:21:06.579: INFO: stdout: "true"
Aug 28 13:21:06.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksx2h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:21:07.860: INFO: stderr: ""
Aug 28 13:21:07.860: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 13:21:07.860: INFO: validating pod update-demo-nautilus-ksx2h
Aug 28 13:21:07.865: INFO: got data: { "image": "nautilus.jpg" }
Aug 28 13:21:07.865: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 28 13:21:07.865: INFO: update-demo-nautilus-ksx2h is verified up and running
Aug 28 13:21:07.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktmpk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:21:09.135: INFO: stderr: ""
Aug 28 13:21:09.135: INFO: stdout: "true"
Aug 28 13:21:09.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktmpk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1483'
Aug 28 13:21:10.477: INFO: stderr: ""
Aug 28 13:21:10.477: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 13:21:10.477: INFO: validating pod update-demo-nautilus-ktmpk
Aug 28 13:21:10.482: INFO: got data: { "image": "nautilus.jpg" }
Aug 28 13:21:10.482: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 28 13:21:10.482: INFO: update-demo-nautilus-ktmpk is verified up and running
STEP: using delete to clean up resources
Aug 28 13:21:10.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1483'
Aug 28 13:21:11.990: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 28 13:21:11.990: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 28 13:21:11.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1483'
Aug 28 13:21:13.795: INFO: stderr: "No resources found in kubectl-1483 namespace.\n"
Aug 28 13:21:13.796: INFO: stdout: ""
Aug 28 13:21:13.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1483 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 28 13:21:15.090: INFO: stderr: ""
Aug 28 13:21:15.090: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:21:15.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1483" for this suite.
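The `kubectl get pods -o template` checks in the test above use Go's text/template engine plus an `exists` helper that kubectl registers for template output. As a rough local illustration (the `existsFn` stand-in below is an assumption, not kubectl's actual implementation), the exact template string from the log can be executed against a pod-shaped map:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// existsFn is a simplified stand-in for the "exists" helper kubectl exposes
// to `-o template` output: it reports whether a chain of keys is present in
// nested maps. (Assumption: kubectl's real helper handles more cases.)
func existsFn(v map[string]interface{}, keys ...string) bool {
	cur := v
	for i, k := range keys {
		val, ok := cur[k]
		if !ok {
			return false
		}
		if i < len(keys)-1 {
			cur, ok = val.(map[string]interface{})
			if !ok {
				return false
			}
		}
	}
	return true
}

// renderRunning executes the exact template from the log against a pod-shaped
// map and returns what kubectl would print on stdout.
func renderRunning(pod map[string]interface{}) string {
	const tmpl = `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`
	t := template.Must(template.New("running").
		Funcs(template.FuncMap{"exists": existsFn}).Parse(tmpl))
	var b strings.Builder
	if err := t.Execute(&b, pod); err != nil {
		panic(err)
	}
	return b.String()
}

// runningPod mimics the relevant slice of a Pod whose update-demo container
// has reached the "running" state.
func runningPod() map[string]interface{} {
	return map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}
}

// pendingPod mimics a Pod with no containerStatuses yet.
func pendingPod() map[string]interface{} {
	return map[string]interface{}{}
}

func main() {
	fmt.Printf("running pod -> %q\n", renderRunning(runningPod()))
	fmt.Printf("pending pod -> %q\n", renderRunning(pendingPod()))
}
```

This reproduces the two stdout values the log shows for this template: `"true"` once the container is running, and `""` while the pod is still Pending (which is why the test logs "is created but not running" and retries).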
• [SLOW TEST:64.467 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should scale a replication controller [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":44,"skipped":685,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:21:15.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 13:21:15.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd23ab9b-e5fe-4fb7-bc19-1c91755a917c" in namespace "projected-8683" to be "Succeeded or Failed"
Aug 28 13:21:15.462: INFO: Pod "downwardapi-volume-fd23ab9b-e5fe-4fb7-bc19-1c91755a917c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.315777ms
Aug 28 13:21:17.485: INFO: Pod "downwardapi-volume-fd23ab9b-e5fe-4fb7-bc19-1c91755a917c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03248734s
Aug 28 13:21:19.493: INFO: Pod "downwardapi-volume-fd23ab9b-e5fe-4fb7-bc19-1c91755a917c": Phase="Running", Reason="", readiness=true. Elapsed: 4.040833859s
Aug 28 13:21:21.501: INFO: Pod "downwardapi-volume-fd23ab9b-e5fe-4fb7-bc19-1c91755a917c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048868659s
STEP: Saw pod success
Aug 28 13:21:21.501: INFO: Pod "downwardapi-volume-fd23ab9b-e5fe-4fb7-bc19-1c91755a917c" satisfied condition "Succeeded or Failed"
Aug 28 13:21:21.506: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-fd23ab9b-e5fe-4fb7-bc19-1c91755a917c container client-container:
STEP: delete the pod
Aug 28 13:21:21.585: INFO: Waiting for pod downwardapi-volume-fd23ab9b-e5fe-4fb7-bc19-1c91755a917c to disappear
Aug 28 13:21:21.595: INFO: Pod downwardapi-volume-fd23ab9b-e5fe-4fb7-bc19-1c91755a917c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:21:21.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8683" for this suite.
• [SLOW TEST:6.419 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":701,"failed":0}
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:21:21.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:21:25.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3698" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":701,"failed":0}
S
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:21:25.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:21:25.896: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-660558dc-7f7a-45fb-bec1-36a1e7ebdda5" in namespace "security-context-test-104" to be "Succeeded or Failed"
Aug 28 13:21:25.935: INFO: Pod "alpine-nnp-false-660558dc-7f7a-45fb-bec1-36a1e7ebdda5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.494996ms
Aug 28 13:21:28.127: INFO: Pod "alpine-nnp-false-660558dc-7f7a-45fb-bec1-36a1e7ebdda5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230440709s
Aug 28 13:21:30.438: INFO: Pod "alpine-nnp-false-660558dc-7f7a-45fb-bec1-36a1e7ebdda5": Phase="Running", Reason="", readiness=true. Elapsed: 4.54182441s
Aug 28 13:21:32.446: INFO: Pod "alpine-nnp-false-660558dc-7f7a-45fb-bec1-36a1e7ebdda5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.549522291s
Aug 28 13:21:32.446: INFO: Pod "alpine-nnp-false-660558dc-7f7a-45fb-bec1-36a1e7ebdda5" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:21:32.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-104" for this suite.
• [SLOW TEST:6.744 seconds]
[k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when creating containers with AllowPrivilegeEscalation
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":702,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:21:32.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-0e5806b6-3a90-4d2e-9dca-13ef104c1830
STEP: Creating a pod to test consume secrets
Aug 28 13:21:32.930: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3ac2dd05-5fdf-4f5f-b5a6-4502015173b8" in namespace "projected-989" to be "Succeeded or Failed"
Aug 28 13:21:32.996: INFO: Pod "pod-projected-secrets-3ac2dd05-5fdf-4f5f-b5a6-4502015173b8": Phase="Pending", Reason="", readiness=false. Elapsed: 65.583816ms
Aug 28 13:21:35.025: INFO: Pod "pod-projected-secrets-3ac2dd05-5fdf-4f5f-b5a6-4502015173b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094939332s
Aug 28 13:21:37.032: INFO: Pod "pod-projected-secrets-3ac2dd05-5fdf-4f5f-b5a6-4502015173b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101872955s
STEP: Saw pod success
Aug 28 13:21:37.032: INFO: Pod "pod-projected-secrets-3ac2dd05-5fdf-4f5f-b5a6-4502015173b8" satisfied condition "Succeeded or Failed"
Aug 28 13:21:37.038: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-3ac2dd05-5fdf-4f5f-b5a6-4502015173b8 container projected-secret-volume-test:
STEP: delete the pod
Aug 28 13:21:37.212: INFO: Waiting for pod pod-projected-secrets-3ac2dd05-5fdf-4f5f-b5a6-4502015173b8 to disappear
Aug 28 13:21:37.230: INFO: Pod pod-projected-secrets-3ac2dd05-5fdf-4f5f-b5a6-4502015173b8 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:21:37.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-989" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":714,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:21:37.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-24d35d52-5d2f-4762-87b3-5efac6af735b
STEP: Creating a pod to test consume configMaps
Aug 28 13:21:37.404: INFO: Waiting up to 5m0s for pod "pod-configmaps-b9a920cc-6718-4904-9c86-3382c2e28867" in namespace "configmap-5247" to be "Succeeded or Failed"
Aug 28 13:21:37.510: INFO: Pod "pod-configmaps-b9a920cc-6718-4904-9c86-3382c2e28867": Phase="Pending", Reason="", readiness=false. Elapsed: 106.060071ms
Aug 28 13:21:39.518: INFO: Pod "pod-configmaps-b9a920cc-6718-4904-9c86-3382c2e28867": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113560207s
Aug 28 13:21:41.527: INFO: Pod "pod-configmaps-b9a920cc-6718-4904-9c86-3382c2e28867": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123058965s
Aug 28 13:21:43.598: INFO: Pod "pod-configmaps-b9a920cc-6718-4904-9c86-3382c2e28867": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.194427943s
STEP: Saw pod success
Aug 28 13:21:43.599: INFO: Pod "pod-configmaps-b9a920cc-6718-4904-9c86-3382c2e28867" satisfied condition "Succeeded or Failed"
Aug 28 13:21:43.853: INFO: Trying to get logs from node kali-worker pod pod-configmaps-b9a920cc-6718-4904-9c86-3382c2e28867 container configmap-volume-test:
STEP: delete the pod
Aug 28 13:21:43.904: INFO: Waiting for pod pod-configmaps-b9a920cc-6718-4904-9c86-3382c2e28867 to disappear
Aug 28 13:21:43.949: INFO: Pod pod-configmaps-b9a920cc-6718-4904-9c86-3382c2e28867 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:21:43.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5247" for this suite.
• [SLOW TEST:7.404 seconds] [sig-storage] ConfigMap /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":729,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:21:44.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-177/secret-test-5ec294e1-a401-4edd-bfc1-72d240d20d11 STEP: Creating a pod to test consume secrets Aug 28 13:21:45.255: INFO: Waiting up to 5m0s for pod "pod-configmaps-80c08b54-a10c-48d8-9073-0485a00e97fe" in namespace "secrets-177" to be "Succeeded or Failed" Aug 28 13:21:45.333: INFO: Pod "pod-configmaps-80c08b54-a10c-48d8-9073-0485a00e97fe": 
Phase="Pending", Reason="", readiness=false. Elapsed: 78.34944ms Aug 28 13:21:47.834: INFO: Pod "pod-configmaps-80c08b54-a10c-48d8-9073-0485a00e97fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.578630663s Aug 28 13:21:49.841: INFO: Pod "pod-configmaps-80c08b54-a10c-48d8-9073-0485a00e97fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.585991774s STEP: Saw pod success Aug 28 13:21:49.841: INFO: Pod "pod-configmaps-80c08b54-a10c-48d8-9073-0485a00e97fe" satisfied condition "Succeeded or Failed" Aug 28 13:21:49.848: INFO: Trying to get logs from node kali-worker pod pod-configmaps-80c08b54-a10c-48d8-9073-0485a00e97fe container env-test: STEP: delete the pod Aug 28 13:21:49.906: INFO: Waiting for pod pod-configmaps-80c08b54-a10c-48d8-9073-0485a00e97fe to disappear Aug 28 13:21:49.946: INFO: Pod pod-configmaps-80c08b54-a10c-48d8-9073-0485a00e97fe no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:21:49.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-177" for this suite. 
• [SLOW TEST:5.298 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":736,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:21:49.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 28 13:21:50.049: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 28 13:22:09.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-8404 create -f -' Aug 28 13:22:14.397: INFO: stderr: "" Aug 28 13:22:14.397: INFO: stdout: "e2e-test-crd-publish-openapi-1410-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 28 13:22:14.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8404 delete e2e-test-crd-publish-openapi-1410-crds test-cr' Aug 28 13:22:15.717: INFO: stderr: "" Aug 28 13:22:15.717: INFO: stdout: "e2e-test-crd-publish-openapi-1410-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Aug 28 13:22:15.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8404 apply -f -' Aug 28 13:22:17.324: INFO: stderr: "" Aug 28 13:22:17.324: INFO: stdout: "e2e-test-crd-publish-openapi-1410-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 28 13:22:17.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8404 delete e2e-test-crd-publish-openapi-1410-crds test-cr' Aug 28 13:22:18.600: INFO: stderr: "" Aug 28 13:22:18.600: INFO: stdout: "e2e-test-crd-publish-openapi-1410-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 28 13:22:18.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1410-crds' Aug 28 13:22:20.127: INFO: stderr: "" Aug 28 13:22:20.128: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1410-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:22:30.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8404" for this suite. • [SLOW TEST:40.325 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":51,"skipped":757,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:22:30.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-d128c55e-7bf0-4d9a-b291-bf02cb9c5bba STEP: Creating a pod to test consume secrets Aug 28 13:22:30.374: INFO: Waiting up to 5m0s for pod "pod-secrets-e97c2b7b-4f02-4683-ba3e-817c39534628" in namespace "secrets-1609" to be "Succeeded or Failed" Aug 28 13:22:30.414: INFO: Pod "pod-secrets-e97c2b7b-4f02-4683-ba3e-817c39534628": Phase="Pending", Reason="", readiness=false. Elapsed: 39.790576ms Aug 28 13:22:32.420: INFO: Pod "pod-secrets-e97c2b7b-4f02-4683-ba3e-817c39534628": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045930576s Aug 28 13:22:34.426: INFO: Pod "pod-secrets-e97c2b7b-4f02-4683-ba3e-817c39534628": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051874369s STEP: Saw pod success Aug 28 13:22:34.426: INFO: Pod "pod-secrets-e97c2b7b-4f02-4683-ba3e-817c39534628" satisfied condition "Succeeded or Failed" Aug 28 13:22:34.431: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-e97c2b7b-4f02-4683-ba3e-817c39534628 container secret-volume-test: STEP: delete the pod Aug 28 13:22:34.473: INFO: Waiting for pod pod-secrets-e97c2b7b-4f02-4683-ba3e-817c39534628 to disappear Aug 28 13:22:34.491: INFO: Pod pod-secrets-e97c2b7b-4f02-4683-ba3e-817c39534628 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 28 13:22:34.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1609" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":761,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 28 13:22:34.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 28 13:22:34.853: INFO: (0) /api/v1/nodes/kali-worker2/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:22:35.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2942" for this suite.
STEP: Destroying namespace "nspatchtest-d0f63fcd-d077-4628-8aa2-46cf02c13e04-1542" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":54,"skipped":806,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
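[Editor's note: the Namespace-patch flow in the test above (create, patch, then verify the label) can be reproduced by hand with roughly the following manifest and patch body. All names here are illustrative placeholders, not the run's generated ones.]

```yaml
# Hypothetical stand-in for the generated namespace
# (the run above used "nspatchtest-d0f63fcd-...-1542").
apiVersion: v1
kind: Namespace
metadata:
  name: nspatchtest-example
---
# Merge-patch body adding a label, applied with e.g.:
#   kubectl patch namespace nspatchtest-example --type=merge -p "$(cat patch.yaml)"
metadata:
  labels:
    testLabel: testLabelValue
```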
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:22:35.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-a498b20e-10e9-4e03-b8e6-c46ba366939a
STEP: Creating configMap with name cm-test-opt-upd-dbb7d2ff-4628-4c1b-95e8-60da6169b7c3
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a498b20e-10e9-4e03-b8e6-c46ba366939a
STEP: Updating configmap cm-test-opt-upd-dbb7d2ff-4628-4c1b-95e8-60da6169b7c3
STEP: Creating configMap with name cm-test-opt-create-8cf17acd-7ee0-47ad-bd82-59ce1aef9420
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:22:43.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8019" for this suite.

• [SLOW TEST:8.342 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":839,"failed":0}
SSSSSSSSSSS
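[Editor's note: the "optional updates" sequence above (delete one source ConfigMap, update another, create a third, then watch the volume) relies on each projected source being marked optional, so a missing ConfigMap does not break the pod. A minimal sketch of such a pod spec follows; the names and the busybox image are assumptions for illustration, not the test's actual values.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # the test generates this name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                         # assumed image, for illustration
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volumes
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del-example    # deleting this must not break the pod
          optional: true
      - configMap:
          name: cm-test-opt-upd-example    # updates here appear in the volume
          optional: true
```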
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:22:43.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-b543e11e-7e88-4e23-bc48-e2a881668706
STEP: Creating a pod to test consume configMaps
Aug 28 13:22:43.681: INFO: Waiting up to 5m0s for pod "pod-configmaps-0aaa8905-dcb3-47c0-ab74-98c957f7ab6a" in namespace "configmap-9841" to be "Succeeded or Failed"
Aug 28 13:22:43.690: INFO: Pod "pod-configmaps-0aaa8905-dcb3-47c0-ab74-98c957f7ab6a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.034231ms
Aug 28 13:22:45.698: INFO: Pod "pod-configmaps-0aaa8905-dcb3-47c0-ab74-98c957f7ab6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016943134s
Aug 28 13:22:47.704: INFO: Pod "pod-configmaps-0aaa8905-dcb3-47c0-ab74-98c957f7ab6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022972361s
Aug 28 13:22:49.743: INFO: Pod "pod-configmaps-0aaa8905-dcb3-47c0-ab74-98c957f7ab6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0619211s
STEP: Saw pod success
Aug 28 13:22:49.743: INFO: Pod "pod-configmaps-0aaa8905-dcb3-47c0-ab74-98c957f7ab6a" satisfied condition "Succeeded or Failed"
Aug 28 13:22:49.789: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-0aaa8905-dcb3-47c0-ab74-98c957f7ab6a container configmap-volume-test: 
STEP: delete the pod
Aug 28 13:22:50.014: INFO: Waiting for pod pod-configmaps-0aaa8905-dcb3-47c0-ab74-98c957f7ab6a to disappear
Aug 28 13:22:50.022: INFO: Pod pod-configmaps-0aaa8905-dcb3-47c0-ab74-98c957f7ab6a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:22:50.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9841" for this suite.

• [SLOW TEST:6.467 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":850,"failed":0}
SSSSSSSS
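[Editor's note: "consumable from pods in volume with mappings" refers to the items field of a configMap volume, which remaps a data key to a chosen relative path under the mount point. A sketch with placeholder names; here the kubelet would project the key to /etc/configmap-volume/path/to/data-2.]

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map-example   # the test uses a generated name
data:
  data-2: value-2
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                          # assumed image, for illustration
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-example
      items:
      - key: data-2
        path: path/to/data-2                # key remapped to this relative path
```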
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:22:50.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5956
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-5956
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5956
Aug 28 13:22:50.711: INFO: Found 0 stateful pods, waiting for 1
Aug 28 13:23:00.720: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 28 13:23:00.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 28 13:23:02.364: INFO: stderr: "I0828 13:23:02.220468     901 log.go:172] (0x4000a24000) (0x40007f7360) Create stream\nI0828 13:23:02.225792     901 log.go:172] (0x4000a24000) (0x40007f7360) Stream added, broadcasting: 1\nI0828 13:23:02.238506     901 log.go:172] (0x4000a24000) Reply frame received for 1\nI0828 13:23:02.239144     901 log.go:172] (0x4000a24000) (0x40009cc140) Create stream\nI0828 13:23:02.239209     901 log.go:172] (0x4000a24000) (0x40009cc140) Stream added, broadcasting: 3\nI0828 13:23:02.240700     901 log.go:172] (0x4000a24000) Reply frame received for 3\nI0828 13:23:02.241003     901 log.go:172] (0x4000a24000) (0x40007f7540) Create stream\nI0828 13:23:02.241061     901 log.go:172] (0x4000a24000) (0x40007f7540) Stream added, broadcasting: 5\nI0828 13:23:02.242194     901 log.go:172] (0x4000a24000) Reply frame received for 5\nI0828 13:23:02.298096     901 log.go:172] (0x4000a24000) Data frame received for 5\nI0828 13:23:02.298351     901 log.go:172] (0x40007f7540) (5) Data frame handling\nI0828 13:23:02.298933     901 log.go:172] (0x40007f7540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 13:23:02.339604     901 log.go:172] (0x4000a24000) Data frame received for 3\nI0828 13:23:02.339792     901 log.go:172] (0x4000a24000) Data frame received for 5\nI0828 13:23:02.339895     901 log.go:172] (0x40007f7540) (5) Data frame handling\nI0828 13:23:02.339987     901 log.go:172] (0x40009cc140) (3) Data frame handling\nI0828 13:23:02.340118     901 log.go:172] (0x40009cc140) (3) Data frame sent\nI0828 13:23:02.340212     901 log.go:172] (0x4000a24000) Data frame received for 3\nI0828 13:23:02.340296     901 log.go:172] (0x40009cc140) (3) Data frame handling\nI0828 13:23:02.341925     901 log.go:172] (0x4000a24000) Data frame received for 1\nI0828 13:23:02.341988     901 log.go:172] (0x40007f7360) (1) Data frame handling\nI0828 13:23:02.342075     901 log.go:172] (0x40007f7360) (1) Data frame sent\nI0828 13:23:02.343265     901 log.go:172] (0x4000a24000) (0x40007f7360) Stream removed, broadcasting: 1\nI0828 13:23:02.347088     901 log.go:172] (0x4000a24000) Go away received\nI0828 13:23:02.350735     901 log.go:172] (0x4000a24000) (0x40007f7360) Stream removed, broadcasting: 1\nI0828 13:23:02.351066     901 log.go:172] (0x4000a24000) (0x40009cc140) Stream removed, broadcasting: 3\nI0828 13:23:02.351577     901 log.go:172] (0x4000a24000) (0x40007f7540) Stream removed, broadcasting: 5\n"
Aug 28 13:23:02.365: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 28 13:23:02.365: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 28 13:23:02.371: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 28 13:23:12.379: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 28 13:23:12.379: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 13:23:12.532: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 28 13:23:12.533: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:50 +0000 UTC  }]
Aug 28 13:23:12.533: INFO: 
Aug 28 13:23:12.533: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 28 13:23:13.720: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.862009532s
Aug 28 13:23:14.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.675315832s
Aug 28 13:23:16.095: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.66779631s
Aug 28 13:23:17.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.300925614s
Aug 28 13:23:18.386: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.065076273s
Aug 28 13:23:19.746: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.009774414s
Aug 28 13:23:21.348: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.64930302s
Aug 28 13:23:22.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 47.684039ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5956
Aug 28 13:23:23.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:23:25.591: INFO: stderr: "I0828 13:23:25.500414     922 log.go:172] (0x400003ad10) (0x40009d8000) Create stream\nI0828 13:23:25.502853     922 log.go:172] (0x400003ad10) (0x40009d8000) Stream added, broadcasting: 1\nI0828 13:23:25.510242     922 log.go:172] (0x400003ad10) Reply frame received for 1\nI0828 13:23:25.510734     922 log.go:172] (0x400003ad10) (0x40007e1220) Create stream\nI0828 13:23:25.510780     922 log.go:172] (0x400003ad10) (0x40007e1220) Stream added, broadcasting: 3\nI0828 13:23:25.511796     922 log.go:172] (0x400003ad10) Reply frame received for 3\nI0828 13:23:25.511979     922 log.go:172] (0x400003ad10) (0x40007e1400) Create stream\nI0828 13:23:25.512020     922 log.go:172] (0x400003ad10) (0x40007e1400) Stream added, broadcasting: 5\nI0828 13:23:25.512996     922 log.go:172] (0x400003ad10) Reply frame received for 5\nI0828 13:23:25.574959     922 log.go:172] (0x400003ad10) Data frame received for 3\nI0828 13:23:25.576308     922 log.go:172] (0x400003ad10) Data frame received for 1\nI0828 13:23:25.576482     922 log.go:172] (0x40007e1220) (3) Data frame handling\nI0828 13:23:25.577066     922 log.go:172] (0x400003ad10) Data frame received for 5\nI0828 13:23:25.577188     922 log.go:172] (0x40007e1400) (5) Data frame handling\nI0828 13:23:25.577781     922 log.go:172] (0x40007e1400) (5) Data frame sent\nI0828 13:23:25.577868     922 log.go:172] (0x40007e1220) (3) Data frame sent\nI0828 13:23:25.578166     922 log.go:172] (0x400003ad10) Data frame received for 5\nI0828 13:23:25.578512     922 log.go:172] (0x40007e1400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 13:23:25.579286     922 log.go:172] (0x40009d8000) (1) Data frame handling\nI0828 13:23:25.579369     922 log.go:172] (0x40009d8000) (1) Data frame sent\nI0828 13:23:25.579494     922 log.go:172] (0x400003ad10) Data frame received for 3\nI0828 13:23:25.579661     922 log.go:172] (0x40007e1220) (3) Data frame handling\nI0828 13:23:25.580376     922 log.go:172] (0x400003ad10) (0x40009d8000) Stream removed, broadcasting: 1\nI0828 13:23:25.581236     922 log.go:172] (0x400003ad10) Go away received\nI0828 13:23:25.584631     922 log.go:172] (0x400003ad10) (0x40009d8000) Stream removed, broadcasting: 1\nI0828 13:23:25.584937     922 log.go:172] (0x400003ad10) (0x40007e1220) Stream removed, broadcasting: 3\nI0828 13:23:25.585078     922 log.go:172] (0x400003ad10) (0x40007e1400) Stream removed, broadcasting: 5\n"
Aug 28 13:23:25.592: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 28 13:23:25.592: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 28 13:23:25.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:23:27.059: INFO: stderr: "I0828 13:23:26.957072     945 log.go:172] (0x40000e6420) (0x4000803680) Create stream\nI0828 13:23:26.960233     945 log.go:172] (0x40000e6420) (0x4000803680) Stream added, broadcasting: 1\nI0828 13:23:26.971841     945 log.go:172] (0x40000e6420) Reply frame received for 1\nI0828 13:23:26.972959     945 log.go:172] (0x40000e6420) (0x40009dc000) Create stream\nI0828 13:23:26.973047     945 log.go:172] (0x40000e6420) (0x40009dc000) Stream added, broadcasting: 3\nI0828 13:23:26.974307     945 log.go:172] (0x40000e6420) Reply frame received for 3\nI0828 13:23:26.974539     945 log.go:172] (0x40000e6420) (0x4000708000) Create stream\nI0828 13:23:26.974588     945 log.go:172] (0x40000e6420) (0x4000708000) Stream added, broadcasting: 5\nI0828 13:23:26.975683     945 log.go:172] (0x40000e6420) Reply frame received for 5\nI0828 13:23:27.035736     945 log.go:172] (0x40000e6420) Data frame received for 3\nI0828 13:23:27.036069     945 log.go:172] (0x40000e6420) Data frame received for 1\nI0828 13:23:27.036257     945 log.go:172] (0x40009dc000) (3) Data frame handling\nI0828 13:23:27.036511     945 log.go:172] (0x40000e6420) Data frame received for 5\nI0828 13:23:27.036567     945 log.go:172] (0x4000708000) (5) Data frame handling\nI0828 13:23:27.036653     945 log.go:172] (0x4000803680) (1) Data frame handling\nI0828 13:23:27.037635     945 log.go:172] (0x4000803680) (1) Data frame sent\nI0828 13:23:27.037694     945 log.go:172] (0x4000708000) (5) Data frame sent\nI0828 13:23:27.037777     945 log.go:172] (0x40009dc000) (3) Data frame sent\nI0828 13:23:27.038049     945 log.go:172] (0x40000e6420) Data frame received for 5\nI0828 13:23:27.038114     945 log.go:172] (0x4000708000) (5) Data frame handling\nI0828 13:23:27.041149     945 log.go:172] (0x40000e6420) Data frame received for 3\nI0828 13:23:27.041220     945 log.go:172] (0x40009dc000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0828 13:23:27.043244     945 log.go:172] (0x40000e6420) (0x4000803680) Stream removed, broadcasting: 1\nI0828 13:23:27.043688     945 log.go:172] (0x40000e6420) Go away received\nI0828 13:23:27.045605     945 log.go:172] (0x40000e6420) (0x4000803680) Stream removed, broadcasting: 1\nI0828 13:23:27.045800     945 log.go:172] (0x40000e6420) (0x40009dc000) Stream removed, broadcasting: 3\nI0828 13:23:27.045948     945 log.go:172] (0x40000e6420) (0x4000708000) Stream removed, broadcasting: 5\n"
Aug 28 13:23:27.059: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 28 13:23:27.059: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 28 13:23:27.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:23:28.651: INFO: stderr: "I0828 13:23:28.560772     968 log.go:172] (0x40006e8160) (0x40006e0280) Create stream\nI0828 13:23:28.564082     968 log.go:172] (0x40006e8160) (0x40006e0280) Stream added, broadcasting: 1\nI0828 13:23:28.574415     968 log.go:172] (0x40006e8160) Reply frame received for 1\nI0828 13:23:28.574973     968 log.go:172] (0x40006e8160) (0x4000752000) Create stream\nI0828 13:23:28.575033     968 log.go:172] (0x40006e8160) (0x4000752000) Stream added, broadcasting: 3\nI0828 13:23:28.576823     968 log.go:172] (0x40006e8160) Reply frame received for 3\nI0828 13:23:28.577513     968 log.go:172] (0x40006e8160) (0x40006e0320) Create stream\nI0828 13:23:28.577633     968 log.go:172] (0x40006e8160) (0x40006e0320) Stream added, broadcasting: 5\nI0828 13:23:28.578954     968 log.go:172] (0x40006e8160) Reply frame received for 5\nI0828 13:23:28.633974     968 log.go:172] (0x40006e8160) Data frame received for 5\nI0828 13:23:28.634364     968 log.go:172] (0x40006e0320) (5) Data frame handling\nI0828 13:23:28.634519     968 log.go:172] (0x40006e8160) Data frame received for 1\nI0828 13:23:28.634673     968 log.go:172] (0x40006e0280) (1) Data frame handling\nI0828 13:23:28.634887     968 log.go:172] (0x40006e8160) Data frame received for 3\nI0828 13:23:28.634996     968 log.go:172] (0x4000752000) (3) Data frame handling\nI0828 13:23:28.636516     968 log.go:172] (0x4000752000) (3) Data frame sent\nI0828 13:23:28.637070     968 log.go:172] (0x40006e8160) Data frame received for 3\nI0828 13:23:28.637181     968 log.go:172] (0x4000752000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0828 13:23:28.637242     968 log.go:172] (0x40006e0280) (1) Data frame sent\nI0828 13:23:28.638563     968 log.go:172] (0x40006e0320) (5) Data frame sent\nI0828 13:23:28.638619     968 log.go:172] (0x40006e8160) Data frame received for 5\nI0828 13:23:28.639124     968 log.go:172] (0x40006e8160) (0x40006e0280) Stream removed, broadcasting: 1\nI0828 13:23:28.639826     968 log.go:172] (0x40006e0320) (5) Data frame handling\nI0828 13:23:28.641798     968 log.go:172] (0x40006e8160) Go away received\nI0828 13:23:28.644588     968 log.go:172] (0x40006e8160) (0x40006e0280) Stream removed, broadcasting: 1\nI0828 13:23:28.644971     968 log.go:172] (0x40006e8160) (0x4000752000) Stream removed, broadcasting: 3\nI0828 13:23:28.645122     968 log.go:172] (0x40006e8160) (0x40006e0320) Stream removed, broadcasting: 5\n"
Aug 28 13:23:28.652: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 28 13:23:28.652: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

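The exec payload repeated throughout this test follows one pattern: the framework runs `/bin/sh -x -c 'mv -v … || true'` inside the pod, so `-x` traces each command to stderr (the `+ mv …` lines above) and `|| true` forces exit code 0 even when the source file is already gone. A minimal local sketch of that behavior, using a throwaway temp directory instead of the pod's real filesystem:

```shell
# Sketch of the `sh -x -c '... || true'` pattern from the log, run locally.
tmpdir=$(mktemp -d)
mkdir "$tmpdir/htdocs"
echo hello > "$tmpdir/index.html"
# First run: the move succeeds, mv -v reports the rename on stderr/stdout.
sh -x -c "mv -v '$tmpdir/index.html' '$tmpdir/htdocs/' || true"
first_rc=$?    # 0: mv itself succeeded
# Second run: the source is gone, mv prints "can't rename ... No such file
# or directory", but `|| true` absorbs the failure, so the rc is still 0.
sh -x -c "mv -v '$tmpdir/index.html' '$tmpdir/htdocs/' || true"
second_rc=$?   # still 0, thanks to `|| true`
echo "first_rc=$first_rc second_rc=$second_rc"
rm -rf "$tmpdir"
```

This is why the log can show an `mv: can't rename` error in stderr while the overall exec still counts as successful: the exit status the framework sees is always 0.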
Aug 28 13:23:28.660: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 13:23:28.660: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 13:23:28.660: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 28 13:23:28.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 28 13:23:30.162: INFO: stderr: "I0828 13:23:30.073355     990 log.go:172] (0x400003a420) (0x4000944000) Create stream\nI0828 13:23:30.075606     990 log.go:172] (0x400003a420) (0x4000944000) Stream added, broadcasting: 1\nI0828 13:23:30.087183     990 log.go:172] (0x400003a420) Reply frame received for 1\nI0828 13:23:30.087843     990 log.go:172] (0x400003a420) (0x40009440a0) Create stream\nI0828 13:23:30.087905     990 log.go:172] (0x400003a420) (0x40009440a0) Stream added, broadcasting: 3\nI0828 13:23:30.089652     990 log.go:172] (0x400003a420) Reply frame received for 3\nI0828 13:23:30.089855     990 log.go:172] (0x400003a420) (0x4000706000) Create stream\nI0828 13:23:30.089900     990 log.go:172] (0x400003a420) (0x4000706000) Stream added, broadcasting: 5\nI0828 13:23:30.091212     990 log.go:172] (0x400003a420) Reply frame received for 5\nI0828 13:23:30.144676     990 log.go:172] (0x400003a420) Data frame received for 5\nI0828 13:23:30.144948     990 log.go:172] (0x4000706000) (5) Data frame handling\nI0828 13:23:30.145336     990 log.go:172] (0x4000706000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 13:23:30.146152     990 log.go:172] (0x400003a420) Data frame received for 3\nI0828 13:23:30.146282     990 log.go:172] (0x40009440a0) (3) Data frame handling\nI0828 13:23:30.146371     990 log.go:172] (0x40009440a0) (3) Data frame sent\nI0828 13:23:30.146438     990 log.go:172] (0x400003a420) Data frame received for 3\nI0828 13:23:30.146500     990 log.go:172] (0x40009440a0) (3) Data frame handling\nI0828 13:23:30.147726     990 log.go:172] (0x400003a420) Data frame received for 5\nI0828 13:23:30.147881     990 log.go:172] (0x4000706000) (5) Data frame handling\nI0828 13:23:30.148917     990 log.go:172] (0x400003a420) Data frame received for 1\nI0828 13:23:30.149007     990 log.go:172] (0x4000944000) (1) Data frame handling\nI0828 13:23:30.149081     990 log.go:172] (0x4000944000) (1) Data frame sent\nI0828 13:23:30.149406     990 log.go:172] (0x400003a420) (0x4000944000) Stream removed, broadcasting: 1\nI0828 13:23:30.150108     990 log.go:172] (0x400003a420) Go away received\nI0828 13:23:30.152400     990 log.go:172] (0x400003a420) (0x4000944000) Stream removed, broadcasting: 1\nI0828 13:23:30.152602     990 log.go:172] (0x400003a420) (0x40009440a0) Stream removed, broadcasting: 3\nI0828 13:23:30.152886     990 log.go:172] (0x400003a420) (0x4000706000) Stream removed, broadcasting: 5\n"
Aug 28 13:23:30.163: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 28 13:23:30.163: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 28 13:23:30.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 28 13:23:31.896: INFO: stderr: "I0828 13:23:31.727616    1012 log.go:172] (0x400003a2c0) (0x4000962000) Create stream\nI0828 13:23:31.730433    1012 log.go:172] (0x400003a2c0) (0x4000962000) Stream added, broadcasting: 1\nI0828 13:23:31.739619    1012 log.go:172] (0x400003a2c0) Reply frame received for 1\nI0828 13:23:31.740127    1012 log.go:172] (0x400003a2c0) (0x40009620a0) Create stream\nI0828 13:23:31.740175    1012 log.go:172] (0x400003a2c0) (0x40009620a0) Stream added, broadcasting: 3\nI0828 13:23:31.741336    1012 log.go:172] (0x400003a2c0) Reply frame received for 3\nI0828 13:23:31.741620    1012 log.go:172] (0x400003a2c0) (0x40007e9540) Create stream\nI0828 13:23:31.741682    1012 log.go:172] (0x400003a2c0) (0x40007e9540) Stream added, broadcasting: 5\nI0828 13:23:31.742656    1012 log.go:172] (0x400003a2c0) Reply frame received for 5\nI0828 13:23:31.796369    1012 log.go:172] (0x400003a2c0) Data frame received for 5\nI0828 13:23:31.796575    1012 log.go:172] (0x40007e9540) (5) Data frame handling\nI0828 13:23:31.796960    1012 log.go:172] (0x40007e9540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 13:23:31.877404    1012 log.go:172] (0x400003a2c0) Data frame received for 3\nI0828 13:23:31.877546    1012 log.go:172] (0x40009620a0) (3) Data frame handling\nI0828 13:23:31.877624    1012 log.go:172] (0x40009620a0) (3) Data frame sent\nI0828 13:23:31.877702    1012 log.go:172] (0x400003a2c0) Data frame received for 3\nI0828 13:23:31.877773    1012 log.go:172] (0x40009620a0) (3) Data frame handling\nI0828 13:23:31.877887    1012 log.go:172] (0x400003a2c0) Data frame received for 5\nI0828 13:23:31.878019    1012 log.go:172] (0x40007e9540) (5) Data frame handling\nI0828 13:23:31.878165    1012 log.go:172] (0x400003a2c0) Data frame received for 1\nI0828 13:23:31.878254    1012 log.go:172] (0x4000962000) (1) Data frame handling\nI0828 13:23:31.878324    1012 log.go:172] (0x4000962000) (1) Data frame sent\nI0828 13:23:31.879332    1012 log.go:172] (0x400003a2c0) (0x4000962000) Stream removed, broadcasting: 1\nI0828 13:23:31.882659    1012 log.go:172] (0x400003a2c0) Go away received\nI0828 13:23:31.884893    1012 log.go:172] (0x400003a2c0) (0x4000962000) Stream removed, broadcasting: 1\nI0828 13:23:31.885058    1012 log.go:172] (0x400003a2c0) (0x40009620a0) Stream removed, broadcasting: 3\nI0828 13:23:31.885196    1012 log.go:172] (0x400003a2c0) (0x40007e9540) Stream removed, broadcasting: 5\n"
Aug 28 13:23:31.897: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 28 13:23:31.897: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 28 13:23:31.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 28 13:23:34.628: INFO: stderr: "I0828 13:23:34.068417    1034 log.go:172] (0x4000b39340) (0x4000a44640) Create stream\nI0828 13:23:34.071379    1034 log.go:172] (0x4000b39340) (0x4000a44640) Stream added, broadcasting: 1\nI0828 13:23:34.086584    1034 log.go:172] (0x4000b39340) Reply frame received for 1\nI0828 13:23:34.087124    1034 log.go:172] (0x4000b39340) (0x40007df7c0) Create stream\nI0828 13:23:34.087177    1034 log.go:172] (0x4000b39340) (0x40007df7c0) Stream added, broadcasting: 3\nI0828 13:23:34.088083    1034 log.go:172] (0x4000b39340) Reply frame received for 3\nI0828 13:23:34.088290    1034 log.go:172] (0x4000b39340) (0x400074ebe0) Create stream\nI0828 13:23:34.088344    1034 log.go:172] (0x4000b39340) (0x400074ebe0) Stream added, broadcasting: 5\nI0828 13:23:34.089203    1034 log.go:172] (0x4000b39340) Reply frame received for 5\nI0828 13:23:34.150321    1034 log.go:172] (0x4000b39340) Data frame received for 5\nI0828 13:23:34.150554    1034 log.go:172] (0x400074ebe0) (5) Data frame handling\nI0828 13:23:34.151074    1034 log.go:172] (0x400074ebe0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 13:23:34.609606    1034 log.go:172] (0x4000b39340) Data frame received for 3\nI0828 13:23:34.609853    1034 log.go:172] (0x40007df7c0) (3) Data frame handling\nI0828 13:23:34.609957    1034 log.go:172] (0x40007df7c0) (3) Data frame sent\nI0828 13:23:34.610077    1034 log.go:172] (0x4000b39340) Data frame received for 5\nI0828 13:23:34.610238    1034 log.go:172] (0x400074ebe0) (5) Data frame handling\nI0828 13:23:34.610632    1034 log.go:172] (0x4000b39340) Data frame received for 3\nI0828 13:23:34.610781    1034 log.go:172] (0x40007df7c0) (3) Data frame handling\nI0828 13:23:34.610956    1034 log.go:172] (0x4000b39340) Data frame received for 1\nI0828 13:23:34.611116    1034 log.go:172] (0x4000a44640) (1) Data frame handling\nI0828 13:23:34.611270    1034 log.go:172] (0x4000a44640) (1) Data frame sent\nI0828 13:23:34.613407    1034 log.go:172] (0x4000b39340) (0x4000a44640) Stream removed, broadcasting: 1\nI0828 13:23:34.615278    1034 log.go:172] (0x4000b39340) Go away received\nI0828 13:23:34.618658    1034 log.go:172] (0x4000b39340) (0x4000a44640) Stream removed, broadcasting: 1\nI0828 13:23:34.618929    1034 log.go:172] (0x4000b39340) (0x40007df7c0) Stream removed, broadcasting: 3\nI0828 13:23:34.619175    1034 log.go:172] (0x4000b39340) (0x400074ebe0) Stream removed, broadcasting: 5\n"
Aug 28 13:23:34.629: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 28 13:23:34.629: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 28 13:23:34.629: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 13:23:34.734: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 28 13:23:44.760: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 28 13:23:44.760: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 28 13:23:44.760: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 28 13:23:45.124: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 28 13:23:45.124: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:50 +0000 UTC  }]
Aug 28 13:23:45.125: INFO: ss-1  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:45.125: INFO: ss-2  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:45.125: INFO: 
Aug 28 13:23:45.125: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 28 13:23:46.177: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 28 13:23:46.177: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:50 +0000 UTC  }]
Aug 28 13:23:46.178: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:46.179: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:46.179: INFO: 
Aug 28 13:23:46.179: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 28 13:23:48.082: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 28 13:23:48.082: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:50 +0000 UTC  }]
Aug 28 13:23:48.082: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:48.083: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:48.085: INFO: 
Aug 28 13:23:48.085: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 28 13:23:49.383: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 28 13:23:49.383: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:50 +0000 UTC  }]
Aug 28 13:23:49.383: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:49.383: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:49.383: INFO: 
Aug 28 13:23:49.384: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 28 13:23:50.906: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 28 13:23:50.906: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:50 +0000 UTC  }]
Aug 28 13:23:50.906: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:50.906: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:50.907: INFO: 
Aug 28 13:23:50.907: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 28 13:23:51.958: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 28 13:23:51.958: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:50 +0000 UTC  }]
Aug 28 13:23:51.959: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:51.959: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:51.960: INFO: 
Aug 28 13:23:51.960: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 28 13:23:53.282: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 28 13:23:53.282: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:50 +0000 UTC  }]
Aug 28 13:23:53.282: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:53.282: INFO: 
Aug 28 13:23:53.282: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 28 13:23:54.799: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 28 13:23:54.799: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:22:50 +0000 UTC  }]
Aug 28 13:23:54.800: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-28 13:23:12 +0000 UTC  }]
Aug 28 13:23:54.800: INFO: 
Aug 28 13:23:54.800: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-5956
Aug 28 13:23:55.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:23:57.426: INFO: rc: 1
Aug 28 13:23:57.427: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
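The failures that follow are expected during scale-down: the framework reruns the failed RunHostCmd every 10 seconds until the pod is gone for good. A minimal sketch of that retry loop (`flaky_cmd` is a hypothetical stand-in that fails twice before succeeding, and the sleep is shortened for illustration):

```shell
# Sketch of the RunHostCmd retry behavior seen in the log: rerun a failing
# command with a delay between attempts, up to a fixed attempt cap.
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
flaky_cmd() {
  # Hypothetical stand-in for the kubectl exec: fails on the first two
  # calls (like "container not found" / "pods not found" above).
  n=$(($(cat "$attempts_file") + 1))
  echo "$n" > "$attempts_file"
  [ "$n" -ge 3 ]                  # rc 1 on attempts 1-2, rc 0 afterwards
}
retry_host_cmd() {
  i=1
  while ! "$@"; do
    echo "Waiting to retry failed RunHostCmd (attempt $i)"
    if [ "$i" -ge 5 ]; then return 1; fi   # give up after the cap
    i=$((i + 1))
    sleep 0.1                     # the real framework waits 10s here
  done
}
retry_host_cmd flaky_cmd
final_attempt=$(cat "$attempts_file")
echo "succeeded on attempt $final_attempt"
rm -f "$attempts_file"
```

In this log the command never recovers (the pod is deleted, so `ss-0` stays NotFound), which is precisely what the test is verifying: scale-down proceeds rather than halting on the unhealthy pod.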
Aug 28 13:24:07.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:24:08.622: INFO: rc: 1
Aug 28 13:24:08.623: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 28 13:24:18.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:24:19.827: INFO: rc: 1
Aug 28 13:24:19.827: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 28 13:24:29.828: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:24:31.107: INFO: rc: 1
Aug 28 13:24:31.108: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 28 13:24:41.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:24:42.774: INFO: rc: 1
Aug 28 13:24:42.774: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 28 13:24:52.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:24:54.040: INFO: rc: 1
Aug 28 13:24:54.041: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 28 13:25:04.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:25:05.575: INFO: rc: 1
Aug 28 13:25:05.575: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
[... 20 further identical retry attempts (13:25:15 through 13:28:53, one every 10s) omitted; each ran the same kubectl exec and failed with rc: 1, stderr: Error from server (NotFound): pods "ss-0" not found ...]
Aug 28 13:29:03.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5956 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 13:29:05.171: INFO: rc: 1
Aug 28 13:29:05.172: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Aug 28 13:29:05.172: INFO: Scaling statefulset ss to 0
Aug 28 13:29:05.639: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 28 13:29:05.644: INFO: Deleting all statefulset in ns statefulset-5956
Aug 28 13:29:05.648: INFO: Scaling statefulset ss to 0
Aug 28 13:29:05.662: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 13:29:05.666: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:29:05.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5956" for this suite.

• [SLOW TEST:375.892 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":57,"skipped":858,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:29:05.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-0dedc77c-2cf2-4fe1-aa66-e042245007f7
STEP: Creating a pod to test consume secrets
Aug 28 13:29:07.030: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-368e2a47-683e-4481-bf72-5bf2f98bb15b" in namespace "projected-8158" to be "Succeeded or Failed"
Aug 28 13:29:07.328: INFO: Pod "pod-projected-secrets-368e2a47-683e-4481-bf72-5bf2f98bb15b": Phase="Pending", Reason="", readiness=false. Elapsed: 298.080446ms
Aug 28 13:29:09.334: INFO: Pod "pod-projected-secrets-368e2a47-683e-4481-bf72-5bf2f98bb15b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303896847s
Aug 28 13:29:11.341: INFO: Pod "pod-projected-secrets-368e2a47-683e-4481-bf72-5bf2f98bb15b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311546896s
Aug 28 13:29:13.521: INFO: Pod "pod-projected-secrets-368e2a47-683e-4481-bf72-5bf2f98bb15b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.49087729s
Aug 28 13:29:15.548: INFO: Pod "pod-projected-secrets-368e2a47-683e-4481-bf72-5bf2f98bb15b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.517961366s
STEP: Saw pod success
Aug 28 13:29:15.548: INFO: Pod "pod-projected-secrets-368e2a47-683e-4481-bf72-5bf2f98bb15b" satisfied condition "Succeeded or Failed"
Aug 28 13:29:15.656: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-368e2a47-683e-4481-bf72-5bf2f98bb15b container projected-secret-volume-test: 
STEP: delete the pod
Aug 28 13:29:15.991: INFO: Waiting for pod pod-projected-secrets-368e2a47-683e-4481-bf72-5bf2f98bb15b to disappear
Aug 28 13:29:15.997: INFO: Pod pod-projected-secrets-368e2a47-683e-4481-bf72-5bf2f98bb15b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:29:15.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8158" for this suite.

• [SLOW TEST:10.079 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":922,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:29:16.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:29:16.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 28 13:29:27.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7811 create -f -'
Aug 28 13:29:37.618: INFO: stderr: ""
Aug 28 13:29:37.618: INFO: stdout: "e2e-test-crd-publish-openapi-4603-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 28 13:29:37.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7811 delete e2e-test-crd-publish-openapi-4603-crds test-cr'
Aug 28 13:29:38.914: INFO: stderr: ""
Aug 28 13:29:38.914: INFO: stdout: "e2e-test-crd-publish-openapi-4603-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 28 13:29:38.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7811 apply -f -'
Aug 28 13:29:40.534: INFO: stderr: ""
Aug 28 13:29:40.534: INFO: stdout: "e2e-test-crd-publish-openapi-4603-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 28 13:29:40.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7811 delete e2e-test-crd-publish-openapi-4603-crds test-cr'
Aug 28 13:29:41.860: INFO: stderr: ""
Aug 28 13:29:41.861: INFO: stdout: "e2e-test-crd-publish-openapi-4603-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 28 13:29:41.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4603-crds'
Aug 28 13:29:43.511: INFO: stderr: ""
Aug 28 13:29:43.511: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4603-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:29:53.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7811" for this suite.

• [SLOW TEST:37.760 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":59,"skipped":925,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:29:53.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 28 13:29:54.007: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:30:07.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7089" for this suite.

• [SLOW TEST:13.447 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":60,"skipped":945,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:30:07.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-9ab140cf-948f-4de2-b481-607d3d85a590
STEP: Creating a pod to test consume configMaps
Aug 28 13:30:07.641: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd" in namespace "projected-8709" to be "Succeeded or Failed"
Aug 28 13:30:07.675: INFO: Pod "pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 33.997771ms
Aug 28 13:30:09.747: INFO: Pod "pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106089927s
Aug 28 13:30:11.753: INFO: Pod "pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112062318s
Aug 28 13:30:13.801: INFO: Pod "pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159762115s
Aug 28 13:30:16.097: INFO: Pod "pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.455610569s
Aug 28 13:30:18.102: INFO: Pod "pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.460422549s
STEP: Saw pod success
Aug 28 13:30:18.102: INFO: Pod "pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd" satisfied condition "Succeeded or Failed"
Aug 28 13:30:18.108: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd container projected-configmap-volume-test: 
STEP: delete the pod
Aug 28 13:30:18.142: INFO: Waiting for pod pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd to disappear
Aug 28 13:30:18.464: INFO: Pod pod-projected-configmaps-7e785b0c-00e8-4e68-b939-3ff7165ea4bd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:30:18.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8709" for this suite.

• [SLOW TEST:11.291 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":947,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:30:18.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 13:30:26.013: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 13:30:28.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218226, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218226, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218226, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218225, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 13:30:30.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218226, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218226, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218226, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218225, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 13:30:32.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218226, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218226, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218226, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218225, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 13:30:35.192: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:30:35.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4798" for this suite.
STEP: Destroying namespace "webhook-4798-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.335 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":62,"skipped":952,"failed":0}
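The `deployment status` lines above show the framework polling the webhook backend Deployment until its `Available` condition turns `True`. Outside the test framework, the same wait can be sketched with plain kubectl (requires a live cluster; names are taken from the log above):

```shell
# Block until the webhook backend Deployment reports minimum availability,
# i.e. until Available=True replaces MinimumReplicasUnavailable.
kubectl rollout status deployment/sample-webhook-deployment -n webhook-4798 --timeout=120s

# Inspect the same conditions the test polls (Available / Progressing):
kubectl get deployment sample-webhook-deployment -n webhook-4798 \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```

This is a sketch of the equivalent manual check, not what the e2e framework itself runs; the framework polls `DeploymentStatus` through the Go client instead.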
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:30:35.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Aug 28 13:30:36.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8423'
Aug 28 13:30:38.631: INFO: stderr: ""
Aug 28 13:30:38.631: INFO: stdout: "pod/pause created\n"
Aug 28 13:30:38.632: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 28 13:30:38.632: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8423" to be "running and ready"
Aug 28 13:30:38.705: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 72.645932ms
Aug 28 13:30:40.825: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192494804s
Aug 28 13:30:43.204: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571117423s
Aug 28 13:30:45.389: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.756892398s
Aug 28 13:30:47.396: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.763916998s
Aug 28 13:30:47.397: INFO: Pod "pause" satisfied condition "running and ready"
Aug 28 13:30:47.397: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 28 13:30:47.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8423'
Aug 28 13:30:49.003: INFO: stderr: ""
Aug 28 13:30:49.003: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 28 13:30:49.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8423'
Aug 28 13:30:50.369: INFO: stderr: ""
Aug 28 13:30:50.369: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          12s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 28 13:30:50.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8423'
Aug 28 13:30:51.642: INFO: stderr: ""
Aug 28 13:30:51.642: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 28 13:30:51.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8423'
Aug 28 13:30:52.909: INFO: stderr: ""
Aug 28 13:30:52.909: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          14s   \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Aug 28 13:30:52.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8423'
Aug 28 13:30:54.486: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 28 13:30:54.486: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 28 13:30:54.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8423'
Aug 28 13:30:55.842: INFO: stderr: "No resources found in kubectl-8423 namespace.\n"
Aug 28 13:30:55.842: INFO: stdout: ""
Aug 28 13:30:55.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8423 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 28 13:30:57.325: INFO: stderr: ""
Aug 28 13:30:57.325: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:30:57.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8423" for this suite.

• [SLOW TEST:21.630 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":63,"skipped":972,"failed":0}
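The label round trip above boils down to three kubectl invocations: set a label, show it as an extra output column with `-L`, and remove it with a trailing dash. A minimal sketch against a live cluster, using the names from the log:

```shell
# Add the label, display it as a column, then remove it (note the trailing '-').
kubectl label pods pause testing-label=testing-label-value -n kubectl-8423
kubectl get pod pause -L testing-label -n kubectl-8423
kubectl label pods pause testing-label- -n kubectl-8423
```

The trailing-dash form (`key-`) is kubectl's standard syntax for deleting a label, which is why the second `get -L testing-label` in the log shows an empty column.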
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:30:57.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-b3255adf-e48a-4a3a-8e7e-65aec9e8f5a6
STEP: Creating a pod to test consume secrets
Aug 28 13:30:58.458: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-63e02da8-7341-4d78-b970-22ac00f4a12d" in namespace "projected-3240" to be "Succeeded or Failed"
Aug 28 13:30:58.498: INFO: Pod "pod-projected-secrets-63e02da8-7341-4d78-b970-22ac00f4a12d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.244069ms
Aug 28 13:31:00.504: INFO: Pod "pod-projected-secrets-63e02da8-7341-4d78-b970-22ac00f4a12d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046387113s
Aug 28 13:31:02.755: INFO: Pod "pod-projected-secrets-63e02da8-7341-4d78-b970-22ac00f4a12d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296439893s
Aug 28 13:31:05.002: INFO: Pod "pod-projected-secrets-63e02da8-7341-4d78-b970-22ac00f4a12d": Phase="Running", Reason="", readiness=true. Elapsed: 6.544026868s
Aug 28 13:31:07.150: INFO: Pod "pod-projected-secrets-63e02da8-7341-4d78-b970-22ac00f4a12d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.692106089s
STEP: Saw pod success
Aug 28 13:31:07.151: INFO: Pod "pod-projected-secrets-63e02da8-7341-4d78-b970-22ac00f4a12d" satisfied condition "Succeeded or Failed"
Aug 28 13:31:07.156: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-63e02da8-7341-4d78-b970-22ac00f4a12d container projected-secret-volume-test: 
STEP: delete the pod
Aug 28 13:31:07.223: INFO: Waiting for pod pod-projected-secrets-63e02da8-7341-4d78-b970-22ac00f4a12d to disappear
Aug 28 13:31:07.269: INFO: Pod pod-projected-secrets-63e02da8-7341-4d78-b970-22ac00f4a12d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:31:07.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3240" for this suite.

• [SLOW TEST:9.804 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":974,"failed":0}
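The test mounts a projected secret with `defaultMode` and `fsGroup` set and verifies a non-root container can read the file. The mode arithmetic itself is ordinary POSIX permissions; a cluster-free sketch (0644 is an illustrative mode, not necessarily the exact value this test uses):

```shell
# Cluster-free illustration of what a volume defaultMode like 0644 means:
# owner read/write, group and others read-only.
f=$(mktemp)
chmod 0644 "$f"
stat -c '%a' "$f"   # prints 644
rm -f "$f"
```

In the pod spec, `fsGroup` additionally chowns the volume's group so that a non-root container in that group can read files created with such modes.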
SSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:31:07.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Aug 28 13:31:15.223: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:31:15.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2347" for this suite.

• [SLOW TEST:8.489 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":65,"skipped":981,"failed":0}
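Adoption and release are visible in the pod's `metadata.ownerReferences`: the controller adds itself as owner when the labels match the selector, and drops the reference when they stop matching. A hedged sketch against a live cluster, reusing the names from the log:

```shell
# After adoption, the ReplicaSet should appear as the pod's owner:
kubectl get pod pod-adoption-release -n replicaset-2347 \
  -o jsonpath='{.metadata.ownerReferences[0].kind}'

# Relabeling the pod away from the selector causes the controller to
# release it (the ownerReference is removed). 'not-matching' is an
# arbitrary illustrative value.
kubectl label pod pod-adoption-release name=not-matching --overwrite -n replicaset-2347
```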
SSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:31:15.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6994, will wait for the garbage collector to delete the pods
Aug 28 13:31:26.231: INFO: Deleting Job.batch foo took: 17.933315ms
Aug 28 13:31:26.931: INFO: Terminating Job.batch foo pods took: 700.743609ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:32:00.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6994" for this suite.

• [SLOW TEST:44.903 seconds]
[sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":66,"skipped":989,"failed":0}
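The ~34 s gap between "Terminating Job.batch foo pods" and the final wait is the garbage collector removing the Job's pods after the Job object is deleted. The equivalent manual workflow can be sketched as (live cluster required; kubectl 1.18-era flags):

```shell
# Deleting the Job lets the garbage collector clean up its pods:
kubectl delete job foo -n job-6994
# Or orphan the pods instead of cascading the deletion:
kubectl delete job foo -n job-6994 --cascade=false
# Verify cleanup via the job-name label the Job controller sets on its pods:
kubectl get pods -n job-6994 -l job-name=foo
```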
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:32:00.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 13:32:00.753: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05e60f3f-6799-45a6-a79e-15f978d02232" in namespace "projected-8810" to be "Succeeded or Failed"
Aug 28 13:32:00.767: INFO: Pod "downwardapi-volume-05e60f3f-6799-45a6-a79e-15f978d02232": Phase="Pending", Reason="", readiness=false. Elapsed: 14.438306ms
Aug 28 13:32:02.791: INFO: Pod "downwardapi-volume-05e60f3f-6799-45a6-a79e-15f978d02232": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038554701s
Aug 28 13:32:04.840: INFO: Pod "downwardapi-volume-05e60f3f-6799-45a6-a79e-15f978d02232": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086684978s
Aug 28 13:32:06.849: INFO: Pod "downwardapi-volume-05e60f3f-6799-45a6-a79e-15f978d02232": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095803181s
STEP: Saw pod success
Aug 28 13:32:06.849: INFO: Pod "downwardapi-volume-05e60f3f-6799-45a6-a79e-15f978d02232" satisfied condition "Succeeded or Failed"
Aug 28 13:32:06.856: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-05e60f3f-6799-45a6-a79e-15f978d02232 container client-container: 
STEP: delete the pod
Aug 28 13:32:07.049: INFO: Waiting for pod downwardapi-volume-05e60f3f-6799-45a6-a79e-15f978d02232 to disappear
Aug 28 13:32:07.101: INFO: Pod downwardapi-volume-05e60f3f-6799-45a6-a79e-15f978d02232 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:32:07.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8810" for this suite.

• [SLOW TEST:6.434 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":993,"failed":0}
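The behavior under test: a downward API volume item with `resourceFieldRef: limits.cpu` falls back to the node's allocatable CPU when the container declares no CPU limit. A hypothetical pod spec sketching this (names and image are illustrative; requires a live cluster):

```shell
# Hypothetical pod mounting 'limits.cpu' via a projected downwardAPI volume.
# With no resources.limits set, the mounted file reports node-allocatable CPU.
kubectl apply -n projected-8810 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit && sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
```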
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:32:07.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 28 13:32:07.188: INFO: Waiting up to 5m0s for pod "pod-3e144787-d644-47c9-b554-010f5631d44b" in namespace "emptydir-4560" to be "Succeeded or Failed"
Aug 28 13:32:07.234: INFO: Pod "pod-3e144787-d644-47c9-b554-010f5631d44b": Phase="Pending", Reason="", readiness=false. Elapsed: 46.508662ms
Aug 28 13:32:09.305: INFO: Pod "pod-3e144787-d644-47c9-b554-010f5631d44b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117502628s
Aug 28 13:32:11.313: INFO: Pod "pod-3e144787-d644-47c9-b554-010f5631d44b": Phase="Running", Reason="", readiness=true. Elapsed: 4.125214026s
Aug 28 13:32:13.321: INFO: Pod "pod-3e144787-d644-47c9-b554-010f5631d44b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13282513s
STEP: Saw pod success
Aug 28 13:32:13.321: INFO: Pod "pod-3e144787-d644-47c9-b554-010f5631d44b" satisfied condition "Succeeded or Failed"
Aug 28 13:32:13.327: INFO: Trying to get logs from node kali-worker pod pod-3e144787-d644-47c9-b554-010f5631d44b container test-container: 
STEP: delete the pod
Aug 28 13:32:13.395: INFO: Waiting for pod pod-3e144787-d644-47c9-b554-010f5631d44b to disappear
Aug 28 13:32:13.401: INFO: Pod pod-3e144787-d644-47c9-b554-010f5631d44b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:32:13.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4560" for this suite.

• [SLOW TEST:6.333 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":995,"failed":0}
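An emptyDir with `medium: Memory` is backed by a tmpfs mount, so the test's 0644 file lives in RAM rather than on the node's disk. The file-level behavior can be sketched without a cluster (`/dev/shm` is the usual host-side tmpfs example; any writable directory shows the same mode semantics):

```shell
# Cluster-free sketch of the 0644-on-tmpfs case: write, set the mode the
# test uses, read back. mktemp -d stands in for the emptyDir mount point.
d=$(mktemp -d)
printf 'volume contents\n' > "$d/demo.txt"
chmod 0644 "$d/demo.txt"   # mirrors the 0644 mode in the test name
cat "$d/demo.txt"          # prints: volume contents
rm -rf "$d"
```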
SSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:32:13.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:32:37.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-895" for this suite.

• [SLOW TEST:24.200 seconds]
[sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":69,"skipped":999,"failed":0}
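"Locally restarted" means the pod has `restartPolicy: OnFailure`, so the kubelet reruns the failing container in place until it exits 0, rather than creating new pods. The control flow is a plain retry loop; a cluster-free simulation of a task that fails twice before succeeding (the counter file and threshold are illustrative):

```shell
# Simulate a flaky task restarted in place, OnFailure-style.
state=$(mktemp)
echo 0 > "$state"
flaky() {
  n=$(cat "$state"); n=$((n + 1)); echo "$n" > "$state"
  [ "$n" -ge 3 ]   # fail on attempts 1 and 2, succeed on attempt 3
}
restarts=0
until flaky; do restarts=$((restarts + 1)); done
echo "restarts before success: $restarts"   # prints: restarts before success: 2
rm -f "$state"
```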
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:32:37.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 28 13:32:37.787: INFO: Waiting up to 5m0s for pod "pod-5e204ae5-84d5-4071-aa6f-150254b20f7e" in namespace "emptydir-6466" to be "Succeeded or Failed"
Aug 28 13:32:37.793: INFO: Pod "pod-5e204ae5-84d5-4071-aa6f-150254b20f7e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.713386ms
Aug 28 13:32:39.800: INFO: Pod "pod-5e204ae5-84d5-4071-aa6f-150254b20f7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012751504s
Aug 28 13:32:41.808: INFO: Pod "pod-5e204ae5-84d5-4071-aa6f-150254b20f7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02047783s
Aug 28 13:32:43.882: INFO: Pod "pod-5e204ae5-84d5-4071-aa6f-150254b20f7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094337825s
STEP: Saw pod success
Aug 28 13:32:43.882: INFO: Pod "pod-5e204ae5-84d5-4071-aa6f-150254b20f7e" satisfied condition "Succeeded or Failed"
Aug 28 13:32:43.977: INFO: Trying to get logs from node kali-worker2 pod pod-5e204ae5-84d5-4071-aa6f-150254b20f7e container test-container: 
STEP: delete the pod
Aug 28 13:32:44.146: INFO: Waiting for pod pod-5e204ae5-84d5-4071-aa6f-150254b20f7e to disappear
Aug 28 13:32:44.157: INFO: Pod pod-5e204ae5-84d5-4071-aa6f-150254b20f7e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:32:44.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6466" for this suite.

• [SLOW TEST:6.525 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1008,"failed":0}
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:32:44.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:32:44.465: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 28 13:32:57.245: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2367 PodName:pod-sharedvolume-87366fd2-12a6-4bcc-9675-34ad37e985fc ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 13:32:57.245: INFO: >>> kubeConfig: /root/.kube/config
I0828 13:32:57.321403      11 log.go:172] (0x40033f0580) (0x4001e6eaa0) Create stream
I0828 13:32:57.321878      11 log.go:172] (0x40033f0580) (0x4001e6eaa0) Stream added, broadcasting: 1
I0828 13:32:57.338597      11 log.go:172] (0x40033f0580) Reply frame received for 1
I0828 13:32:57.339487      11 log.go:172] (0x40033f0580) (0x4001e6eb40) Create stream
I0828 13:32:57.339587      11 log.go:172] (0x40033f0580) (0x4001e6eb40) Stream added, broadcasting: 3
I0828 13:32:57.341275      11 log.go:172] (0x40033f0580) Reply frame received for 3
I0828 13:32:57.341588      11 log.go:172] (0x40033f0580) (0x400217e000) Create stream
I0828 13:32:57.341661      11 log.go:172] (0x40033f0580) (0x400217e000) Stream added, broadcasting: 5
I0828 13:32:57.343648      11 log.go:172] (0x40033f0580) Reply frame received for 5
I0828 13:32:57.433179      11 log.go:172] (0x40033f0580) Data frame received for 5
I0828 13:32:57.434925      11 log.go:172] (0x40033f0580) Data frame received for 3
I0828 13:32:57.435347      11 log.go:172] (0x40033f0580) Data frame received for 1
I0828 13:32:57.435638      11 log.go:172] (0x4001e6eaa0) (1) Data frame handling
I0828 13:32:57.437221      11 log.go:172] (0x4001e6eb40) (3) Data frame handling
I0828 13:32:57.437495      11 log.go:172] (0x400217e000) (5) Data frame handling
I0828 13:32:57.439743      11 log.go:172] (0x4001e6eb40) (3) Data frame sent
I0828 13:32:57.440001      11 log.go:172] (0x4001e6eaa0) (1) Data frame sent
I0828 13:32:57.440176      11 log.go:172] (0x40033f0580) Data frame received for 3
I0828 13:32:57.440255      11 log.go:172] (0x4001e6eb40) (3) Data frame handling
I0828 13:32:57.440819      11 log.go:172] (0x40033f0580) (0x4001e6eaa0) Stream removed, broadcasting: 1
I0828 13:32:57.441496      11 log.go:172] (0x40033f0580) Go away received
I0828 13:32:57.443724      11 log.go:172] (0x40033f0580) (0x4001e6eaa0) Stream removed, broadcasting: 1
I0828 13:32:57.444056      11 log.go:172] (0x40033f0580) (0x4001e6eb40) Stream removed, broadcasting: 3
I0828 13:32:57.444275      11 log.go:172] (0x40033f0580) (0x400217e000) Stream removed, broadcasting: 5
Aug 28 13:32:57.444: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:32:57.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2367" for this suite.

• [SLOW TEST:12.458 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":72,"skipped":1017,"failed":0}
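For reference, the shared-volume behavior this spec verifies can be reproduced with a pod like the following sketch (names, image tags, and the written message are illustrative, not the values the e2e framework generates):

```yaml
# Hypothetical pod: two containers sharing one emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-example
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container
    image: busybox
    command: ["/bin/sh", "-c",
              "echo 'shared content' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```

The test then execs `cat /usr/share/volumeshare/shareddata.txt` in the other container (visible in the ExecWithOptions line above) and asserts the content written by its sibling is readable.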
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:32:57.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 28 13:32:59.734: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 28 13:33:01.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218379, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218379, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218379, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218379, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 13:33:04.953: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:33:05.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:33:07.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4697" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:10.179 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":73,"skipped":1029,"failed":0}
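The conversion flow exercised above relies on a CRD whose `spec.conversion` points at a webhook service. A minimal sketch of that wiring (group, service name, path, and CA bundle are placeholders, not the test's actual values):

```yaml
# Hypothetical CRD fragment: two served versions converted by a webhook.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: crd-webhook-example   # placeholder
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert               # placeholder path
        caBundle: "<base64-encoded CA certificate>"
```

Listing the same CRs at both versions (the "List CRs in v1" / "List CRs in v2" steps) forces the API server to round-trip a non-homogeneous list through this webhook.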
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:33:07.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-7c762342-3b32-44a7-81ab-0353e4dd50c2 in namespace container-probe-3753
Aug 28 13:33:11.793: INFO: Started pod test-webserver-7c762342-3b32-44a7-81ab-0353e4dd50c2 in namespace container-probe-3753
STEP: checking the pod's current state and verifying that restartCount is present
Aug 28 13:33:11.798: INFO: Initial restart count of pod test-webserver-7c762342-3b32-44a7-81ab-0353e4dd50c2 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:37:12.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3753" for this suite.

• [SLOW TEST:245.176 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1058,"failed":0}
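The probe configuration this spec validates looks roughly like the sketch below: an HTTP GET against `/healthz` that keeps succeeding, so `restartCount` stays at its initial value of 0 for the full observation window (note the ~4-minute gap between pod start and namespace teardown in the timestamps above). The image and port are assumptions, not the framework's generated values:

```yaml
# Hypothetical webserver pod whose liveness probe never fails,
# so the kubelet never restarts the container.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: registry.example.com/test-webserver:latest   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      timeoutSeconds: 1
      failureThreshold: 1
```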
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:37:12.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:37:52.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-492" for this suite.

• [SLOW TEST:39.612 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1064,"failed":0}
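Each `terminate-cmd-*` container above is a short-lived command whose exit code, combined with the pod's restart policy, determines the expected `RestartCount`, `Phase`, `Ready` condition, and `State`. A hypothetical single case (the rpa/rpof/rpn names suggest restart policies Always/OnFailure/Never; the command here is illustrative):

```yaml
# Hypothetical pod: the container exits non-zero, so with
# restartPolicy: OnFailure the kubelet restarts it and
# RestartCount increments on each attempt.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-example
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["/bin/sh", "-c", "exit 1"]
```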
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:37:52.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:38:09.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1902" for this suite.

• [SLOW TEST:17.440 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":76,"skipped":1075,"failed":0}
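The quota lifecycle traced above (create quota, create secret, observe usage rise, delete secret, observe usage released) can be sketched with an object-count quota on secrets. The name and limit are illustrative:

```yaml
# Hypothetical ResourceQuota: status.used["secrets"] increments when a
# Secret is created in the namespace and decrements when it is deleted.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-secrets
spec:
  hard:
    secrets: "10"
```

`kubectl describe resourcequota quota-secrets` shows the hard limit alongside current usage, which is what the "Ensuring resource quota status ..." steps poll for.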
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:38:09.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 28 13:38:10.335: INFO: Waiting up to 5m0s for pod "pod-bf4c9b7c-18e6-4643-9882-b76140f9822e" in namespace "emptydir-573" to be "Succeeded or Failed"
Aug 28 13:38:10.346: INFO: Pod "pod-bf4c9b7c-18e6-4643-9882-b76140f9822e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.315695ms
Aug 28 13:38:12.353: INFO: Pod "pod-bf4c9b7c-18e6-4643-9882-b76140f9822e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017545674s
Aug 28 13:38:14.410: INFO: Pod "pod-bf4c9b7c-18e6-4643-9882-b76140f9822e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074487884s
STEP: Saw pod success
Aug 28 13:38:14.410: INFO: Pod "pod-bf4c9b7c-18e6-4643-9882-b76140f9822e" satisfied condition "Succeeded or Failed"
Aug 28 13:38:14.415: INFO: Trying to get logs from node kali-worker pod pod-bf4c9b7c-18e6-4643-9882-b76140f9822e container test-container: 
STEP: delete the pod
Aug 28 13:38:14.566: INFO: Waiting for pod pod-bf4c9b7c-18e6-4643-9882-b76140f9822e to disappear
Aug 28 13:38:14.576: INFO: Pod pod-bf4c9b7c-18e6-4643-9882-b76140f9822e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:38:14.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-573" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1085,"failed":0}
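The "(non-root,0644,default)" case writes a mode-0644 file into an emptyDir on the node's default medium while running as a non-root user, then exits so the framework can check the pod reached "Succeeded" and inspect the container log. A hedged sketch (UID, paths, and command are assumptions):

```yaml
# Hypothetical pod mirroring the non-root/0644/default-medium emptyDir case.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root UID (assumption)
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c",
              "echo hello > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # default (node disk) medium
```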
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:38:14.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-ab6f3d70-f5c1-4d13-a882-7b09bd518a41
STEP: Creating a pod to test consume secrets
Aug 28 13:38:14.694: INFO: Waiting up to 5m0s for pod "pod-secrets-f5272198-c30f-4f90-a591-c1a638c9b8d8" in namespace "secrets-3604" to be "Succeeded or Failed"
Aug 28 13:38:14.704: INFO: Pod "pod-secrets-f5272198-c30f-4f90-a591-c1a638c9b8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.692712ms
Aug 28 13:38:16.713: INFO: Pod "pod-secrets-f5272198-c30f-4f90-a591-c1a638c9b8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019265341s
Aug 28 13:38:18.722: INFO: Pod "pod-secrets-f5272198-c30f-4f90-a591-c1a638c9b8d8": Phase="Running", Reason="", readiness=true. Elapsed: 4.028079243s
Aug 28 13:38:20.730: INFO: Pod "pod-secrets-f5272198-c30f-4f90-a591-c1a638c9b8d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03583356s
STEP: Saw pod success
Aug 28 13:38:20.730: INFO: Pod "pod-secrets-f5272198-c30f-4f90-a591-c1a638c9b8d8" satisfied condition "Succeeded or Failed"
Aug 28 13:38:20.736: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-f5272198-c30f-4f90-a591-c1a638c9b8d8 container secret-volume-test: 
STEP: delete the pod
Aug 28 13:38:20.807: INFO: Waiting for pod pod-secrets-f5272198-c30f-4f90-a591-c1a638c9b8d8 to disappear
Aug 28 13:38:20.818: INFO: Pod pod-secrets-f5272198-c30f-4f90-a591-c1a638c9b8d8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:38:20.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3604" for this suite.

• [SLOW TEST:6.245 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1097,"failed":0}
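"Consumable in multiple volumes" means the same Secret is referenced by two distinct volume entries and mounted at two paths in one container. A sketch under that assumption (secret name, key, and mount paths are illustrative):

```yaml
# Hypothetical pod mounting one Secret through two separate volumes.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test
  - name: secret-volume-2
    secret:
      secretName: secret-test
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c",
              "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
```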
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:38:20.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:38:32.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7069" for this suite.

• [SLOW TEST:11.191 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":79,"skipped":1108,"failed":0}
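The ReplicaSet variant follows the same pattern as the secret case, using an object-count quota keyed by resource and API group. Name and limit are illustrative:

```yaml
# Hypothetical ResourceQuota counting ReplicaSets; creating a ReplicaSet
# in the namespace raises status.used, deleting it releases the usage.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-replicasets
spec:
  hard:
    count/replicasets.apps: "5"
```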
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:38:32.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 28 13:38:32.885: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 28 13:38:32.908: INFO: Waiting for terminating namespaces to be deleted...
Aug 28 13:38:32.913: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 28 13:38:32.927: INFO: kindnet-f7bnz from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 28 13:38:32.927: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 13:38:32.927: INFO: kube-proxy-hhbw6 from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 28 13:38:32.927: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 13:38:32.927: INFO: daemon-set-rsfwc from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 28 13:38:32.927: INFO: 	Container app ready: true, restart count 0
Aug 28 13:38:32.927: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 28 13:38:32.977: INFO: kindnet-4v6sn from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 28 13:38:32.977: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 13:38:32.977: INFO: kube-proxy-m77qg from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 28 13:38:32.977: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 13:38:32.977: INFO: daemon-set-69cql from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 28 13:38:32.977: INFO: 	Container app ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162f7228ff8e4f26], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:38:34.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1756" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":80,"skipped":1122,"failed":0}
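The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector") is produced by a pod whose `nodeSelector` no node's labels satisfy; the pod simply stays Pending. A sketch (label key/value and image are assumptions):

```yaml
# Hypothetical pod with an unsatisfiable nodeSelector: it remains Pending
# and the scheduler emits a FailedScheduling event.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example-label: no-node-has-this-value
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```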
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:38:34.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-1bd21deb-8774-4c33-b1f3-be466ba78494
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:38:34.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3664" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":81,"skipped":1162,"failed":0}
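This negative test submits a Secret whose data map contains an empty string as a key; the API server rejects it at validation time, so no object is ever created. A sketch of such an invalid manifest:

```yaml
# Hypothetical invalid Secret: the empty key name ("") fails API-server
# validation, so `kubectl apply` / client Create returns an error.
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-example
data:
  "": dmFsdWUtMQ==   # base64 payload is illustrative
```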
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:38:34.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-7839
STEP: creating replication controller nodeport-test in namespace services-7839
I0828 13:38:34.593171      11 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7839, replica count: 2
I0828 13:38:37.646576      11 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 13:38:40.648533      11 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 28 13:38:40.649: INFO: Creating new exec pod
Aug 28 13:38:45.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-7839 execpodb8qr4 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 28 13:38:47.252: INFO: stderr: "I0828 13:38:47.137675    1990 log.go:172] (0x40000ee420) (0x4000720140) Create stream\nI0828 13:38:47.140030    1990 log.go:172] (0x40000ee420) (0x4000720140) Stream added, broadcasting: 1\nI0828 13:38:47.149630    1990 log.go:172] (0x40000ee420) Reply frame received for 1\nI0828 13:38:47.150218    1990 log.go:172] (0x40000ee420) (0x40005bcbe0) Create stream\nI0828 13:38:47.150285    1990 log.go:172] (0x40000ee420) (0x40005bcbe0) Stream added, broadcasting: 3\nI0828 13:38:47.152003    1990 log.go:172] (0x40000ee420) Reply frame received for 3\nI0828 13:38:47.152531    1990 log.go:172] (0x40000ee420) (0x40007201e0) Create stream\nI0828 13:38:47.152632    1990 log.go:172] (0x40000ee420) (0x40007201e0) Stream added, broadcasting: 5\nI0828 13:38:47.154499    1990 log.go:172] (0x40000ee420) Reply frame received for 5\nI0828 13:38:47.231444    1990 log.go:172] (0x40000ee420) Data frame received for 5\nI0828 13:38:47.231878    1990 log.go:172] (0x40007201e0) (5) Data frame handling\nI0828 13:38:47.232146    1990 log.go:172] (0x40000ee420) Data frame received for 3\nI0828 13:38:47.232227    1990 log.go:172] (0x40005bcbe0) (3) Data frame handling\nI0828 13:38:47.232842    1990 log.go:172] (0x40000ee420) Data frame received for 1\nI0828 13:38:47.232995    1990 log.go:172] (0x4000720140) (1) Data frame handling\nI0828 13:38:47.233899    1990 log.go:172] (0x4000720140) (1) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0828 13:38:47.234339    1990 log.go:172] (0x40007201e0) (5) Data frame sent\nI0828 13:38:47.235131    1990 log.go:172] (0x40000ee420) Data frame received for 5\nI0828 13:38:47.235219    1990 log.go:172] (0x40007201e0) (5) Data frame handling\nI0828 13:38:47.235309    1990 log.go:172] (0x40007201e0) (5) Data frame sent\nI0828 13:38:47.235395    1990 log.go:172] (0x40000ee420) Data frame received for 5\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0828 13:38:47.235498    1990 log.go:172] (0x40007201e0) (5) 
Data frame handling\nI0828 13:38:47.236428    1990 log.go:172] (0x40000ee420) (0x4000720140) Stream removed, broadcasting: 1\nI0828 13:38:47.237904    1990 log.go:172] (0x40000ee420) Go away received\nI0828 13:38:47.241006    1990 log.go:172] (0x40000ee420) (0x4000720140) Stream removed, broadcasting: 1\nI0828 13:38:47.241331    1990 log.go:172] (0x40000ee420) (0x40005bcbe0) Stream removed, broadcasting: 3\nI0828 13:38:47.241557    1990 log.go:172] (0x40000ee420) (0x40007201e0) Stream removed, broadcasting: 5\n"
Aug 28 13:38:47.253: INFO: stdout: ""
Aug 28 13:38:47.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-7839 execpodb8qr4 -- /bin/sh -x -c nc -zv -t -w 2 10.105.55.201 80'
Aug 28 13:38:48.724: INFO: stderr: "I0828 13:38:48.606255    2012 log.go:172] (0x400003a210) (0x40008072c0) Create stream\nI0828 13:38:48.608710    2012 log.go:172] (0x400003a210) (0x40008072c0) Stream added, broadcasting: 1\nI0828 13:38:48.621562    2012 log.go:172] (0x400003a210) Reply frame received for 1\nI0828 13:38:48.623023    2012 log.go:172] (0x400003a210) (0x40008074a0) Create stream\nI0828 13:38:48.623140    2012 log.go:172] (0x400003a210) (0x40008074a0) Stream added, broadcasting: 3\nI0828 13:38:48.625205    2012 log.go:172] (0x400003a210) Reply frame received for 3\nI0828 13:38:48.625689    2012 log.go:172] (0x400003a210) (0x40006ee000) Create stream\nI0828 13:38:48.625790    2012 log.go:172] (0x400003a210) (0x40006ee000) Stream added, broadcasting: 5\nI0828 13:38:48.627239    2012 log.go:172] (0x400003a210) Reply frame received for 5\nI0828 13:38:48.705594    2012 log.go:172] (0x400003a210) Data frame received for 5\nI0828 13:38:48.705941    2012 log.go:172] (0x400003a210) Data frame received for 1\nI0828 13:38:48.706115    2012 log.go:172] (0x40008072c0) (1) Data frame handling\nI0828 13:38:48.706324    2012 log.go:172] (0x40006ee000) (5) Data frame handling\nI0828 13:38:48.706584    2012 log.go:172] (0x400003a210) Data frame received for 3\nI0828 13:38:48.706708    2012 log.go:172] (0x40008074a0) (3) Data frame handling\nI0828 13:38:48.707477    2012 log.go:172] (0x40008072c0) (1) Data frame sent\nI0828 13:38:48.707767    2012 log.go:172] (0x40006ee000) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.55.201 80\nConnection to 10.105.55.201 80 port [tcp/http] succeeded!\nI0828 13:38:48.709142    2012 log.go:172] (0x400003a210) Data frame received for 5\nI0828 13:38:48.709207    2012 log.go:172] (0x40006ee000) (5) Data frame handling\nI0828 13:38:48.711098    2012 log.go:172] (0x400003a210) (0x40008072c0) Stream removed, broadcasting: 1\nI0828 13:38:48.712667    2012 log.go:172] (0x400003a210) Go away received\nI0828 13:38:48.715793    2012 log.go:172] 
(0x400003a210) (0x40008072c0) Stream removed, broadcasting: 1\nI0828 13:38:48.716032    2012 log.go:172] (0x400003a210) (0x40008074a0) Stream removed, broadcasting: 3\nI0828 13:38:48.716222    2012 log.go:172] (0x400003a210) (0x40006ee000) Stream removed, broadcasting: 5\n"
Aug 28 13:38:48.725: INFO: stdout: ""
Aug 28 13:38:48.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-7839 execpodb8qr4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31845'
Aug 28 13:38:50.157: INFO: stderr: "I0828 13:38:50.031692    2035 log.go:172] (0x400003a630) (0x40008f81e0) Create stream\nI0828 13:38:50.038888    2035 log.go:172] (0x400003a630) (0x40008f81e0) Stream added, broadcasting: 1\nI0828 13:38:50.050014    2035 log.go:172] (0x400003a630) Reply frame received for 1\nI0828 13:38:50.050609    2035 log.go:172] (0x400003a630) (0x4000756000) Create stream\nI0828 13:38:50.050666    2035 log.go:172] (0x400003a630) (0x4000756000) Stream added, broadcasting: 3\nI0828 13:38:50.052281    2035 log.go:172] (0x400003a630) Reply frame received for 3\nI0828 13:38:50.052563    2035 log.go:172] (0x400003a630) (0x400067ea00) Create stream\nI0828 13:38:50.052633    2035 log.go:172] (0x400003a630) (0x400067ea00) Stream added, broadcasting: 5\nI0828 13:38:50.053846    2035 log.go:172] (0x400003a630) Reply frame received for 5\nI0828 13:38:50.137706    2035 log.go:172] (0x400003a630) Data frame received for 5\nI0828 13:38:50.137960    2035 log.go:172] (0x400067ea00) (5) Data frame handling\nI0828 13:38:50.138436    2035 log.go:172] (0x400067ea00) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 31845\nI0828 13:38:50.138917    2035 log.go:172] (0x400003a630) Data frame received for 3\nI0828 13:38:50.139033    2035 log.go:172] (0x4000756000) (3) Data frame handling\nI0828 13:38:50.139507    2035 log.go:172] (0x400003a630) Data frame received for 5\nI0828 13:38:50.139601    2035 log.go:172] (0x400067ea00) (5) Data frame handling\nI0828 13:38:50.139675    2035 log.go:172] (0x400067ea00) (5) Data frame sent\nI0828 13:38:50.139741    2035 log.go:172] (0x400003a630) Data frame received for 5\nI0828 13:38:50.139800    2035 log.go:172] (0x400067ea00) (5) Data frame handling\nConnection to 172.18.0.15 31845 port [tcp/31845] succeeded!\nI0828 13:38:50.140144    2035 log.go:172] (0x400003a630) Data frame received for 1\nI0828 13:38:50.140248    2035 log.go:172] (0x40008f81e0) (1) Data frame handling\nI0828 13:38:50.140353    2035 log.go:172] 
(0x40008f81e0) (1) Data frame sent\nI0828 13:38:50.142082    2035 log.go:172] (0x400003a630) (0x40008f81e0) Stream removed, broadcasting: 1\nI0828 13:38:50.144556    2035 log.go:172] (0x400003a630) Go away received\nI0828 13:38:50.146862    2035 log.go:172] (0x400003a630) (0x40008f81e0) Stream removed, broadcasting: 1\nI0828 13:38:50.147728    2035 log.go:172] (0x400003a630) (0x4000756000) Stream removed, broadcasting: 3\nI0828 13:38:50.147967    2035 log.go:172] (0x400003a630) (0x400067ea00) Stream removed, broadcasting: 5\n"
Aug 28 13:38:50.158: INFO: stdout: ""
Aug 28 13:38:50.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-7839 execpodb8qr4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31845'
Aug 28 13:38:51.615: INFO: stderr: "I0828 13:38:51.508117    2058 log.go:172] (0x400093e0b0) (0x4000a8a140) Create stream\nI0828 13:38:51.510671    2058 log.go:172] (0x400093e0b0) (0x4000a8a140) Stream added, broadcasting: 1\nI0828 13:38:51.522779    2058 log.go:172] (0x400093e0b0) Reply frame received for 1\nI0828 13:38:51.523669    2058 log.go:172] (0x400093e0b0) (0x40007d90e0) Create stream\nI0828 13:38:51.523763    2058 log.go:172] (0x400093e0b0) (0x40007d90e0) Stream added, broadcasting: 3\nI0828 13:38:51.525599    2058 log.go:172] (0x400093e0b0) Reply frame received for 3\nI0828 13:38:51.525841    2058 log.go:172] (0x400093e0b0) (0x4000ab8000) Create stream\nI0828 13:38:51.525897    2058 log.go:172] (0x400093e0b0) (0x4000ab8000) Stream added, broadcasting: 5\nI0828 13:38:51.527341    2058 log.go:172] (0x400093e0b0) Reply frame received for 5\nI0828 13:38:51.597820    2058 log.go:172] (0x400093e0b0) Data frame received for 3\nI0828 13:38:51.598114    2058 log.go:172] (0x40007d90e0) (3) Data frame handling\nI0828 13:38:51.598225    2058 log.go:172] (0x400093e0b0) Data frame received for 5\nI0828 13:38:51.598327    2058 log.go:172] (0x4000ab8000) (5) Data frame handling\nI0828 13:38:51.598410    2058 log.go:172] (0x400093e0b0) Data frame received for 1\nI0828 13:38:51.598496    2058 log.go:172] (0x4000a8a140) (1) Data frame handling\nI0828 13:38:51.599613    2058 log.go:172] (0x4000a8a140) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 31845\nConnection to 172.18.0.13 31845 port [tcp/31845] succeeded!\nI0828 13:38:51.600931    2058 log.go:172] (0x4000ab8000) (5) Data frame sent\nI0828 13:38:51.600990    2058 log.go:172] (0x400093e0b0) Data frame received for 5\nI0828 13:38:51.601037    2058 log.go:172] (0x4000ab8000) (5) Data frame handling\nI0828 13:38:51.602794    2058 log.go:172] (0x400093e0b0) (0x4000a8a140) Stream removed, broadcasting: 1\nI0828 13:38:51.603713    2058 log.go:172] (0x400093e0b0) Go away received\nI0828 13:38:51.606612    2058 log.go:172] 
(0x400093e0b0) (0x4000a8a140) Stream removed, broadcasting: 1\nI0828 13:38:51.606871    2058 log.go:172] (0x400093e0b0) (0x40007d90e0) Stream removed, broadcasting: 3\nI0828 13:38:51.607052    2058 log.go:172] (0x400093e0b0) (0x4000ab8000) Stream removed, broadcasting: 5\n"
Aug 28 13:38:51.616: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:38:51.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7839" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:17.311 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":82,"skipped":1184,"failed":0}
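[Annotation, not part of the original log] The NodePort test above probes the same service three ways from a helper pod: by service DNS name, by ClusterIP, and by each node's InternalIP plus the allocated nodePort. A minimal sketch of reproducing that matrix by hand, assuming a NodePort service `nodeport-test` and an exec helper pod `execpodb8qr4` (with `nc` available) as in the log:

```shell
NS=services-7839
POD=execpodb8qr4

# 1. Service DNS name + service port
kubectl exec -n "$NS" "$POD" -- /bin/sh -x -c 'nc -zv -t -w 2 nodeport-test 80'

# 2. ClusterIP + service port
CLUSTER_IP=$(kubectl get svc -n "$NS" nodeport-test -o jsonpath='{.spec.clusterIP}')
kubectl exec -n "$NS" "$POD" -- /bin/sh -x -c "nc -zv -t -w 2 $CLUSTER_IP 80"

# 3. Every node's InternalIP + the allocated nodePort
NODE_PORT=$(kubectl get svc -n "$NS" nodeport-test -o jsonpath='{.spec.ports[0].nodePort}')
for ip in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  kubectl exec -n "$NS" "$POD" -- /bin/sh -x -c "nc -zv -t -w 2 $ip $NODE_PORT"
done
```

Each `nc -zv` exits 0 on a successful TCP connect, which is what the framework asserts on.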
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:38:51.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5441
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug 28 13:38:51.797: INFO: Found 0 stateful pods, waiting for 3
Aug 28 13:39:01.807: INFO: Found 2 stateful pods, waiting for 3
Aug 28 13:39:12.095: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 13:39:12.095: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 13:39:12.095: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 28 13:39:12.135: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 28 13:39:22.820: INFO: Updating stateful set ss2
Aug 28 13:39:22.995: INFO: Waiting for Pod statefulset-5441/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 28 13:39:34.083: INFO: Found 2 stateful pods, waiting for 3
Aug 28 13:39:44.093: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 13:39:44.093: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 13:39:44.093: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 28 13:39:44.134: INFO: Updating stateful set ss2
Aug 28 13:39:44.195: INFO: Waiting for Pod statefulset-5441/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 28 13:39:54.210: INFO: Waiting for Pod statefulset-5441/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 28 13:40:04.295: INFO: Updating stateful set ss2
Aug 28 13:40:04.446: INFO: Waiting for StatefulSet statefulset-5441/ss2 to complete update
Aug 28 13:40:04.447: INFO: Waiting for Pod statefulset-5441/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 28 13:40:14.463: INFO: Deleting all statefulset in ns statefulset-5441
Aug 28 13:40:14.468: INFO: Scaling statefulset ss2 to 0
Aug 28 13:40:34.494: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 13:40:34.498: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:40:34.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5441" for this suite.

• [SLOW TEST:102.917 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":83,"skipped":1192,"failed":0}
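[Annotation, not part of the original log] The canary and phased rolling update above are driven by the StatefulSet `RollingUpdate` partition: pods with an ordinal >= partition get the new revision, pods below it stay pinned. A sketch of the same mechanics with kubectl, assuming a 3-replica StatefulSet `ss2` as in the log (the container name `webserver` is illustrative, not the e2e fixture's actual name):

```shell
# Pin everything first: partition >= replicas means no pod updates even
# after the template changes ("Not applying an update when the partition
# is greater than the number of replicas").
kubectl -n statefulset-5441 patch statefulset ss2 -p \
  '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'

# Change the template; a new controller revision appears but nothing rolls.
kubectl -n statefulset-5441 set image statefulset/ss2 \
  webserver=docker.io/library/httpd:2.4.39-alpine

# Canary: partition 2 updates only ss2-2 (ordinal >= 2).
kubectl -n statefulset-5441 patch statefulset ss2 -p \
  '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

# Phased rollout: keep lowering the partition; 0 updates the rest.
kubectl -n statefulset-5441 patch statefulset ss2 -p \
  '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
```

The "Restoring Pods to the correct revision" step checks that a deleted pinned pod is recreated at its pinned revision, not the new one.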
SSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:40:34.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:40:34.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9083'
Aug 28 13:40:42.368: INFO: stderr: ""
Aug 28 13:40:42.368: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 28 13:40:42.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9083'
Aug 28 13:40:44.608: INFO: stderr: ""
Aug 28 13:40:44.608: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 28 13:40:45.617: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 13:40:45.618: INFO: Found 0 / 1
Aug 28 13:40:46.616: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 13:40:46.617: INFO: Found 1 / 1
Aug 28 13:40:46.617: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 28 13:40:46.623: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 13:40:46.624: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 28 13:40:46.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config describe pod agnhost-master-f7l5k --namespace=kubectl-9083'
Aug 28 13:40:48.298: INFO: stderr: ""
Aug 28 13:40:48.299: INFO: stdout: "Name:         agnhost-master-f7l5k\nNamespace:    kubectl-9083\nPriority:     0\nNode:         kali-worker/172.18.0.15\nStart Time:   Fri, 28 Aug 2020 13:40:42 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.1.228\nIPs:\n  IP:           10.244.1.228\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://a2fa14b9b35fdc8337b3c32120dc96641228f07598d8ba1c336041d7970278b3\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 28 Aug 2020 13:40:45 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6fsn2 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-6fsn2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-6fsn2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  6s    default-scheduler     Successfully assigned kubectl-9083/agnhost-master-f7l5k to kali-worker\n  Normal  Pulled     5s    kubelet, kali-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    3s    kubelet, kali-worker  Created container agnhost-master\n  Normal  Started    3s    kubelet, kali-worker  Started container agnhost-master\n"
Aug 28 13:40:48.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9083'
Aug 28 13:40:49.702: INFO: stderr: ""
Aug 28 13:40:49.702: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-9083\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-f7l5k\n"
Aug 28 13:40:49.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9083'
Aug 28 13:40:51.020: INFO: stderr: ""
Aug 28 13:40:51.020: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-9083\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.111.207.118\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.228:6379\nSession Affinity:  None\nEvents:            <none>\n"
Aug 28 13:40:51.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Aug 28 13:40:52.419: INFO: stderr: ""
Aug 28 13:40:52.419: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 23 Aug 2020 15:12:35 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Fri, 28 Aug 2020 13:40:52 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 28 Aug 2020 13:38:09 +0000   Sun, 23 Aug 2020 15:12:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Fri, 28 Aug 2020 13:38:09 +0000   Sun, 23 Aug 2020 15:12:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 28 Aug 2020 13:38:09 +0000   Sun, 23 Aug 2020 15:12:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 28 Aug 2020 13:38:09 +0000   Sun, 23 Aug 2020 15:13:30 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.16\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  
hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2cdec6c7db1f4ffb92010874f8f6c78a\n  System UUID:                97843c5f-7109-4963-bbac-ed94fa5ea417\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu Groovy Gorilla (development branch)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.0-rc.1-4-g43366250\n  Kubelet Version:            v1.18.8\n  Kube-Proxy Version:         v1.18.8\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-4dkcx                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     4d22h\n  kube-system                 coredns-66bff467f8-wt2xm                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     4d22h\n  kube-system                 etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d22h\n  kube-system                 kindnet-4vm7t                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      4d22h\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         4d22h\n  kube-system                 
kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         4d22h\n  kube-system                 kube-proxy-lnmvk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d22h\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         4d22h\n  local-path-storage          local-path-provisioner-5b4b545c55-bfxpd       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d22h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
Aug 28 13:40:52.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config describe namespace kubectl-9083'
Aug 28 13:40:53.718: INFO: stderr: ""
Aug 28 13:40:53.719: INFO: stdout: "Name:         kubectl-9083\nLabels:       e2e-framework=kubectl\n              e2e-run=d954fb53-acf7-4ebc-8e0d-160968c94da0\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:40:53.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9083" for this suite.

• [SLOW TEST:19.184 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":84,"skipped":1195,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:40:53.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 28 13:41:03.942: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 28 13:41:03.968: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 28 13:41:05.969: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 28 13:41:05.976: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 28 13:41:07.969: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 28 13:41:07.977: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 28 13:41:09.969: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 28 13:41:10.036: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:41:10.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3563" for this suite.

• [SLOW TEST:16.327 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1206,"failed":0}
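[Annotation, not part of the original log] The lifecycle-hook test above creates a pod whose container runs a postStart exec hook, verifies the hook fired, then deletes the pod and polls until it disappears. A minimal sketch of that pod shape; the image, container name, and hook command here are illustrative, not the exact e2e fixture:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container immediately after it starts; the
          # kubelet restarts the container if the hook fails.
          command: ["sh", "-c", "echo poststart > /tmp/hook.log"]
EOF

# The log's "Waiting for pod ... to disappear" loop corresponds to:
kubectl delete pod pod-with-poststart-exec-hook --wait=true
```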
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:41:10.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 28 13:41:19.006: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:41:20.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1802" for this suite.

• [SLOW TEST:11.635 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1209,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:41:21.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 28 13:41:35.492: INFO: Successfully updated pod "labelsupdate144bf443-08b0-4be1-9969-625215970efb"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:41:37.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2912" for this suite.

• [SLOW TEST:15.840 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1230,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:41:37.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 13:41:40.528: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 13:41:42.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 13:41:45.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 13:41:46.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734218900, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 13:41:50.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 28 13:41:50.314: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:41:50.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-460" for this suite.
STEP: Destroying namespace "webhook-460-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.216 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":88,"skipped":1261,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:41:50.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1396.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1396.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1396.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1396.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1396.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1396.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 13:41:59.602: INFO: DNS probes using dns-1396/dns-test-6f06c944-dbde-4686-900a-5b10ba78d1f9 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:41:59.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1396" for this suite.

• [SLOW TEST:9.923 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":89,"skipped":1285,"failed":0}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:42:00.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 28 13:42:17.495: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:42:19.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9585" for this suite.

• [SLOW TEST:19.467 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1288,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:42:20.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-scnd
STEP: Creating a pod to test atomic-volume-subpath
Aug 28 13:42:21.675: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-scnd" in namespace "subpath-201" to be "Succeeded or Failed"
Aug 28 13:42:22.344: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Pending", Reason="", readiness=false. Elapsed: 668.46687ms
Aug 28 13:42:24.415: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739889891s
Aug 28 13:42:26.423: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.747099891s
Aug 28 13:42:28.430: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Running", Reason="", readiness=true. Elapsed: 6.754135995s
Aug 28 13:42:30.437: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Running", Reason="", readiness=true. Elapsed: 8.76194877s
Aug 28 13:42:32.444: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Running", Reason="", readiness=true. Elapsed: 10.768182441s
Aug 28 13:42:34.515: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Running", Reason="", readiness=true. Elapsed: 12.839528096s
Aug 28 13:42:36.757: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Running", Reason="", readiness=true. Elapsed: 15.081414568s
Aug 28 13:42:38.763: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Running", Reason="", readiness=true. Elapsed: 17.087642623s
Aug 28 13:42:40.769: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Running", Reason="", readiness=true. Elapsed: 19.093814088s
Aug 28 13:42:43.175: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Running", Reason="", readiness=true. Elapsed: 21.499835702s
Aug 28 13:42:45.221: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Running", Reason="", readiness=true. Elapsed: 23.545397311s
Aug 28 13:42:47.432: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Running", Reason="", readiness=true. Elapsed: 25.756355406s
Aug 28 13:42:49.463: INFO: Pod "pod-subpath-test-secret-scnd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.787184268s
STEP: Saw pod success
Aug 28 13:42:49.463: INFO: Pod "pod-subpath-test-secret-scnd" satisfied condition "Succeeded or Failed"
Aug 28 13:42:49.468: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-scnd container test-container-subpath-secret-scnd: 
STEP: delete the pod
Aug 28 13:42:50.467: INFO: Waiting for pod pod-subpath-test-secret-scnd to disappear
Aug 28 13:42:50.470: INFO: Pod pod-subpath-test-secret-scnd no longer exists
STEP: Deleting pod pod-subpath-test-secret-scnd
Aug 28 13:42:50.471: INFO: Deleting pod "pod-subpath-test-secret-scnd" in namespace "subpath-201"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:42:50.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-201" for this suite.

• [SLOW TEST:30.905 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":91,"skipped":1308,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:42:51.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:42:52.539: INFO: Create a RollingUpdate DaemonSet
Aug 28 13:42:52.546: INFO: Check that daemon pods launch on every node of the cluster
Aug 28 13:42:52.578: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:42:52.800: INFO: Number of nodes with available pods: 0
Aug 28 13:42:52.801: INFO: Node kali-worker is running more than one daemon pod
Aug 28 13:42:53.917: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:42:54.225: INFO: Number of nodes with available pods: 0
Aug 28 13:42:54.225: INFO: Node kali-worker is running more than one daemon pod
Aug 28 13:42:55.043: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:42:55.754: INFO: Number of nodes with available pods: 0
Aug 28 13:42:55.754: INFO: Node kali-worker is running more than one daemon pod
Aug 28 13:42:55.810: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:42:55.816: INFO: Number of nodes with available pods: 0
Aug 28 13:42:55.816: INFO: Node kali-worker is running more than one daemon pod
Aug 28 13:42:57.122: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:42:57.128: INFO: Number of nodes with available pods: 0
Aug 28 13:42:57.128: INFO: Node kali-worker is running more than one daemon pod
Aug 28 13:42:58.169: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:42:58.204: INFO: Number of nodes with available pods: 0
Aug 28 13:42:58.204: INFO: Node kali-worker is running more than one daemon pod
Aug 28 13:42:58.909: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:42:59.278: INFO: Number of nodes with available pods: 0
Aug 28 13:42:59.279: INFO: Node kali-worker is running more than one daemon pod
Aug 28 13:42:59.831: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:43:00.180: INFO: Number of nodes with available pods: 0
Aug 28 13:43:00.180: INFO: Node kali-worker is running more than one daemon pod
Aug 28 13:43:00.863: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:43:00.870: INFO: Number of nodes with available pods: 0
Aug 28 13:43:00.871: INFO: Node kali-worker is running more than one daemon pod
Aug 28 13:43:01.811: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:43:01.817: INFO: Number of nodes with available pods: 0
Aug 28 13:43:01.817: INFO: Node kali-worker is running more than one daemon pod
Aug 28 13:43:02.811: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:43:02.817: INFO: Number of nodes with available pods: 2
Aug 28 13:43:02.817: INFO: Number of running nodes: 2, number of available pods: 2
Aug 28 13:43:02.817: INFO: Update the DaemonSet to trigger a rollout
Aug 28 13:43:02.830: INFO: Updating DaemonSet daemon-set
Aug 28 13:43:09.591: INFO: Roll back the DaemonSet before rollout is complete
Aug 28 13:43:09.678: INFO: Updating DaemonSet daemon-set
Aug 28 13:43:09.678: INFO: Make sure DaemonSet rollback is complete
Aug 28 13:43:09.722: INFO: Wrong image for pod: daemon-set-rx76v. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 28 13:43:09.722: INFO: Pod daemon-set-rx76v is not available
Aug 28 13:43:09.758: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:43:10.766: INFO: Wrong image for pod: daemon-set-rx76v. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 28 13:43:10.766: INFO: Pod daemon-set-rx76v is not available
Aug 28 13:43:10.775: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:43:11.799: INFO: Wrong image for pod: daemon-set-rx76v. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 28 13:43:11.799: INFO: Pod daemon-set-rx76v is not available
Aug 28 13:43:11.809: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:43:13.077: INFO: Wrong image for pod: daemon-set-rx76v. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 28 13:43:13.077: INFO: Pod daemon-set-rx76v is not available
Aug 28 13:43:13.085: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:43:13.768: INFO: Wrong image for pod: daemon-set-rx76v. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 28 13:43:13.768: INFO: Pod daemon-set-rx76v is not available
Aug 28 13:43:13.909: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 13:43:15.828: INFO: Pod daemon-set-8tp5r is not available
Aug 28 13:43:15.915: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2605, will wait for the garbage collector to delete the pods
Aug 28 13:43:16.935: INFO: Deleting DaemonSet.extensions daemon-set took: 272.236265ms
Aug 28 13:43:17.236: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.747113ms
Aug 28 13:43:27.841: INFO: Number of nodes with available pods: 0
Aug 28 13:43:27.841: INFO: Number of running nodes: 0, number of available pods: 0
Aug 28 13:43:27.844: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2605/daemonsets","resourceVersion":"1760395"},"items":null}

Aug 28 13:43:27.848: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2605/pods","resourceVersion":"1760395"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:43:27.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2605" for this suite.

• [SLOW TEST:36.896 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":92,"skipped":1310,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:43:27.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Aug 28 13:43:34.479: INFO: Pod pod-hostip-332b4405-0391-4da0-8a69-7c766a431db7 has hostIP: 172.18.0.15
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:43:34.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-782" for this suite.

• [SLOW TEST:6.642 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1342,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:43:34.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 28 13:43:37.906: INFO: Waiting up to 5m0s for pod "downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3" in namespace "downward-api-9689" to be "Succeeded or Failed"
Aug 28 13:43:38.420: INFO: Pod "downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3": Phase="Pending", Reason="", readiness=false. Elapsed: 513.696346ms
Aug 28 13:43:40.769: INFO: Pod "downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.863072635s
Aug 28 13:43:43.055: INFO: Pod "downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.148849578s
Aug 28 13:43:45.080: INFO: Pod "downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.173795973s
Aug 28 13:43:47.222: INFO: Pod "downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.316232473s
Aug 28 13:43:49.229: INFO: Pod "downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.322609374s
Aug 28 13:43:51.350: INFO: Pod "downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.443693289s
STEP: Saw pod success
Aug 28 13:43:51.350: INFO: Pod "downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3" satisfied condition "Succeeded or Failed"
Aug 28 13:43:51.355: INFO: Trying to get logs from node kali-worker2 pod downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3 container dapi-container: 
STEP: delete the pod
Aug 28 13:43:51.677: INFO: Waiting for pod downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3 to disappear
Aug 28 13:43:52.043: INFO: Pod downward-api-9c5881c4-b8d3-4e3d-936e-1481128b47d3 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:43:52.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9689" for this suite.

• [SLOW TEST:17.803 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1348,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:43:52.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8826
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-8826
I0828 13:43:53.407279      11 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8826, replica count: 2
I0828 13:43:56.458568      11 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 13:43:59.459229      11 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 13:44:02.460010      11 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 28 13:44:02.460: INFO: Creating new exec pod
Aug 28 13:44:09.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-8826 execpodtw65g -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 28 13:44:11.048: INFO: stderr: "I0828 13:44:10.890345    2250 log.go:172] (0x40000f2420) (0x400089f4a0) Create stream\nI0828 13:44:10.894260    2250 log.go:172] (0x40000f2420) (0x400089f4a0) Stream added, broadcasting: 1\nI0828 13:44:10.904416    2250 log.go:172] (0x40000f2420) Reply frame received for 1\nI0828 13:44:10.905046    2250 log.go:172] (0x40000f2420) (0x4000b26000) Create stream\nI0828 13:44:10.905123    2250 log.go:172] (0x40000f2420) (0x4000b26000) Stream added, broadcasting: 3\nI0828 13:44:10.907044    2250 log.go:172] (0x40000f2420) Reply frame received for 3\nI0828 13:44:10.907260    2250 log.go:172] (0x40000f2420) (0x4000b260a0) Create stream\nI0828 13:44:10.907333    2250 log.go:172] (0x40000f2420) (0x4000b260a0) Stream added, broadcasting: 5\nI0828 13:44:10.908686    2250 log.go:172] (0x40000f2420) Reply frame received for 5\nI0828 13:44:11.015352    2250 log.go:172] (0x40000f2420) Data frame received for 3\nI0828 13:44:11.015509    2250 log.go:172] (0x40000f2420) Data frame received for 1\nI0828 13:44:11.015843    2250 log.go:172] (0x4000b26000) (3) Data frame handling\nI0828 13:44:11.016042    2250 log.go:172] (0x400089f4a0) (1) Data frame handling\nI0828 13:44:11.016598    2250 log.go:172] (0x40000f2420) Data frame received for 5\nI0828 13:44:11.016711    2250 log.go:172] (0x4000b260a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0828 13:44:11.018298    2250 log.go:172] (0x4000b260a0) (5) Data frame sent\nI0828 13:44:11.018412    2250 log.go:172] (0x40000f2420) Data frame received for 5\nI0828 13:44:11.018464    2250 log.go:172] (0x4000b260a0) (5) Data frame handling\nI0828 13:44:11.018546    2250 log.go:172] (0x400089f4a0) (1) Data frame sent\nI0828 13:44:11.019038    2250 log.go:172] (0x40000f2420) (0x400089f4a0) Stream removed, broadcasting: 1\nI0828 13:44:11.021634    2250 log.go:172] (0x40000f2420) Go away received\nI0828 13:44:11.024045    2250 log.go:172] (0x40000f2420) (0x400089f4a0) Stream removed, broadcasting: 1\nI0828 13:44:11.024407    2250 log.go:172] (0x40000f2420) (0x4000b26000) Stream removed, broadcasting: 3\nI0828 13:44:11.024969    2250 log.go:172] (0x40000f2420) (0x4000b260a0) Stream removed, broadcasting: 5\n"
Aug 28 13:44:11.049: INFO: stdout: ""
Aug 28 13:44:11.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-8826 execpodtw65g -- /bin/sh -x -c nc -zv -t -w 2 10.100.65.84 80'
Aug 28 13:44:12.545: INFO: stderr: "I0828 13:44:12.425027    2274 log.go:172] (0x4000a040b0) (0x4000972140) Create stream\nI0828 13:44:12.427392    2274 log.go:172] (0x4000a040b0) (0x4000972140) Stream added, broadcasting: 1\nI0828 13:44:12.442030    2274 log.go:172] (0x4000a040b0) Reply frame received for 1\nI0828 13:44:12.442862    2274 log.go:172] (0x4000a040b0) (0x40009721e0) Create stream\nI0828 13:44:12.442936    2274 log.go:172] (0x4000a040b0) (0x40009721e0) Stream added, broadcasting: 3\nI0828 13:44:12.445403    2274 log.go:172] (0x4000a040b0) Reply frame received for 3\nI0828 13:44:12.446037    2274 log.go:172] (0x4000a040b0) (0x40007f9180) Create stream\nI0828 13:44:12.446176    2274 log.go:172] (0x4000a040b0) (0x40007f9180) Stream added, broadcasting: 5\nI0828 13:44:12.448053    2274 log.go:172] (0x4000a040b0) Reply frame received for 5\nI0828 13:44:12.520443    2274 log.go:172] (0x4000a040b0) Data frame received for 5\nI0828 13:44:12.521073    2274 log.go:172] (0x40007f9180) (5) Data frame handling\nI0828 13:44:12.521619    2274 log.go:172] (0x4000a040b0) Data frame received for 3\nI0828 13:44:12.521709    2274 log.go:172] (0x40009721e0) (3) Data frame handling\nI0828 13:44:12.522855    2274 log.go:172] (0x40007f9180) (5) Data frame sent\n+ nc -zv -t -w 2 10.100.65.84 80\nConnection to 10.100.65.84 80 port [tcp/http] succeeded!\nI0828 13:44:12.523077    2274 log.go:172] (0x4000a040b0) Data frame received for 5\nI0828 13:44:12.523137    2274 log.go:172] (0x40007f9180) (5) Data frame handling\nI0828 13:44:12.525133    2274 log.go:172] (0x4000a040b0) Data frame received for 1\nI0828 13:44:12.525194    2274 log.go:172] (0x4000972140) (1) Data frame handling\nI0828 13:44:12.525272    2274 log.go:172] (0x4000972140) (1) Data frame sent\nI0828 13:44:12.526043    2274 log.go:172] (0x4000a040b0) (0x4000972140) Stream removed, broadcasting: 1\nI0828 13:44:12.528135    2274 log.go:172] (0x4000a040b0) Go away received\nI0828 13:44:12.531037    2274 log.go:172] (0x4000a040b0) (0x4000972140) Stream removed, broadcasting: 1\nI0828 13:44:12.531575    2274 log.go:172] (0x4000a040b0) (0x40009721e0) Stream removed, broadcasting: 3\nI0828 13:44:12.531733    2274 log.go:172] (0x4000a040b0) (0x40007f9180) Stream removed, broadcasting: 5\n"
Aug 28 13:44:12.546: INFO: stdout: ""
Aug 28 13:44:12.546: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:44:12.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8826" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:20.187 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":95,"skipped":1353,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:44:12.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:44:12.731: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-4e4671d2-4b4e-42a4-a1d8-0977a2a7d5b4" in namespace "security-context-test-6013" to be "Succeeded or Failed"
Aug 28 13:44:12.743: INFO: Pod "busybox-readonly-false-4e4671d2-4b4e-42a4-a1d8-0977a2a7d5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.771149ms
Aug 28 13:44:14.751: INFO: Pod "busybox-readonly-false-4e4671d2-4b4e-42a4-a1d8-0977a2a7d5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019049899s
Aug 28 13:44:16.797: INFO: Pod "busybox-readonly-false-4e4671d2-4b4e-42a4-a1d8-0977a2a7d5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06587867s
Aug 28 13:44:18.990: INFO: Pod "busybox-readonly-false-4e4671d2-4b4e-42a4-a1d8-0977a2a7d5b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.258312236s
Aug 28 13:44:18.990: INFO: Pod "busybox-readonly-false-4e4671d2-4b4e-42a4-a1d8-0977a2a7d5b4" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:44:18.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6013" for this suite.

• [SLOW TEST:6.960 seconds]
[k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1372,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:44:19.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9206
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-9206
Aug 28 13:44:21.121: INFO: Found 0 stateful pods, waiting for 1
Aug 28 13:44:31.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 28 13:44:31.164: INFO: Deleting all statefulset in ns statefulset-9206
Aug 28 13:44:31.183: INFO: Scaling statefulset ss to 0
Aug 28 13:44:41.326: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 13:44:41.332: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:44:41.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9206" for this suite.

• [SLOW TEST:21.803 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":97,"skipped":1397,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:44:41.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 13:44:41.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ddc033e-7f7f-4832-87ed-eb907f43a550" in namespace "projected-7130" to be "Succeeded or Failed"
Aug 28 13:44:41.484: INFO: Pod "downwardapi-volume-0ddc033e-7f7f-4832-87ed-eb907f43a550": Phase="Pending", Reason="", readiness=false. Elapsed: 14.270327ms
Aug 28 13:44:43.491: INFO: Pod "downwardapi-volume-0ddc033e-7f7f-4832-87ed-eb907f43a550": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021660675s
Aug 28 13:44:45.498: INFO: Pod "downwardapi-volume-0ddc033e-7f7f-4832-87ed-eb907f43a550": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028498352s
STEP: Saw pod success
Aug 28 13:44:45.498: INFO: Pod "downwardapi-volume-0ddc033e-7f7f-4832-87ed-eb907f43a550" satisfied condition "Succeeded or Failed"
Aug 28 13:44:45.505: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-0ddc033e-7f7f-4832-87ed-eb907f43a550 container client-container: 
STEP: delete the pod
Aug 28 13:44:45.803: INFO: Waiting for pod downwardapi-volume-0ddc033e-7f7f-4832-87ed-eb907f43a550 to disappear
Aug 28 13:44:45.825: INFO: Pod downwardapi-volume-0ddc033e-7f7f-4832-87ed-eb907f43a550 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:44:45.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7130" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1463,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:44:45.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 28 13:44:46.073: INFO: namespace kubectl-9414
Aug 28 13:44:46.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9414'
Aug 28 13:44:48.133: INFO: stderr: ""
Aug 28 13:44:48.133: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 28 13:44:49.143: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 13:44:49.143: INFO: Found 0 / 1
Aug 28 13:44:50.218: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 13:44:50.218: INFO: Found 0 / 1
Aug 28 13:44:51.277: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 13:44:51.277: INFO: Found 0 / 1
Aug 28 13:44:52.369: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 13:44:52.369: INFO: Found 0 / 1
Aug 28 13:44:53.158: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 13:44:53.158: INFO: Found 0 / 1
Aug 28 13:44:54.140: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 13:44:54.140: INFO: Found 1 / 1
Aug 28 13:44:54.140: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 28 13:44:54.143: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 13:44:54.144: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 28 13:44:54.144: INFO: wait on agnhost-master startup in kubectl-9414 
Aug 28 13:44:54.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs agnhost-master-n5vnp agnhost-master --namespace=kubectl-9414'
Aug 28 13:44:55.408: INFO: stderr: ""
Aug 28 13:44:55.408: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 28 13:44:55.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9414'
Aug 28 13:44:56.725: INFO: stderr: ""
Aug 28 13:44:56.725: INFO: stdout: "service/rm2 exposed\n"
Aug 28 13:44:56.731: INFO: Service rm2 in namespace kubectl-9414 found.
STEP: exposing service
Aug 28 13:44:58.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9414'
Aug 28 13:45:00.140: INFO: stderr: ""
Aug 28 13:45:00.140: INFO: stdout: "service/rm3 exposed\n"
Aug 28 13:45:00.169: INFO: Service rm3 in namespace kubectl-9414 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:45:02.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9414" for this suite.

• [SLOW TEST:16.290 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":99,"skipped":1484,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:45:02.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 13:45:05.239: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 13:45:07.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219105, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219105, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219105, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219105, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 13:45:10.360: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:45:20.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1323" for this suite.
STEP: Destroying namespace "webhook-1323-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.085 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":100,"skipped":1486,"failed":0}
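[Editor's note: the test above registers its webhook programmatically through the AdmissionRegistration API; a manifest roughly equivalent to that registration is sketched below. All names, the service namespace, the path, and the caBundle are placeholders, not values from this run.]

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-pod-and-configmap   # placeholder name
webhooks:
- name: deny-unwanted.example.com         # placeholder name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]     # the test denies both kinds
  clientConfig:
    service:
      namespace: webhook-1323             # placeholder namespace
      name: e2e-test-webhook
      path: /always-deny                  # placeholder path
    caBundle: "<base64-encoded CA>"       # placeholder
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
```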
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:45:21.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-8abcc016-ee99-40f1-85b1-b70812c61760
STEP: Creating a pod to test consume configMaps
Aug 28 13:45:21.832: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5707e881-2fdc-4fb6-ad65-2e498d3a065b" in namespace "projected-2540" to be "Succeeded or Failed"
Aug 28 13:45:21.885: INFO: Pod "pod-projected-configmaps-5707e881-2fdc-4fb6-ad65-2e498d3a065b": Phase="Pending", Reason="", readiness=false. Elapsed: 52.690334ms
Aug 28 13:45:23.892: INFO: Pod "pod-projected-configmaps-5707e881-2fdc-4fb6-ad65-2e498d3a065b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060189415s
Aug 28 13:45:25.974: INFO: Pod "pod-projected-configmaps-5707e881-2fdc-4fb6-ad65-2e498d3a065b": Phase="Running", Reason="", readiness=true. Elapsed: 4.14233731s
Aug 28 13:45:27.981: INFO: Pod "pod-projected-configmaps-5707e881-2fdc-4fb6-ad65-2e498d3a065b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.149178288s
STEP: Saw pod success
Aug 28 13:45:27.981: INFO: Pod "pod-projected-configmaps-5707e881-2fdc-4fb6-ad65-2e498d3a065b" satisfied condition "Succeeded or Failed"
Aug 28 13:45:27.987: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-5707e881-2fdc-4fb6-ad65-2e498d3a065b container projected-configmap-volume-test: 
STEP: delete the pod
Aug 28 13:45:28.010: INFO: Waiting for pod pod-projected-configmaps-5707e881-2fdc-4fb6-ad65-2e498d3a065b to disappear
Aug 28 13:45:28.039: INFO: Pod pod-projected-configmaps-5707e881-2fdc-4fb6-ad65-2e498d3a065b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:45:28.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2540" for this suite.

• [SLOW TEST:6.769 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1561,"failed":0}
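[Editor's note: a pod of the shape this test creates — a projected ConfigMap volume consumed by a non-root container — is sketched below. Names, the user ID, and the mount path are placeholders, not values from this run.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example       # placeholder name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                            # non-root, per the test title
  containers:
  - name: projected-configmap-volume-test
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["mounttest", "--file_content=/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-example   # placeholder
```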
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:45:28.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:45:28.140: INFO: Creating ReplicaSet my-hostname-basic-64e8d8a7-5be6-45ed-b2a8-b03dc1e3d889
Aug 28 13:45:28.158: INFO: Pod name my-hostname-basic-64e8d8a7-5be6-45ed-b2a8-b03dc1e3d889: Found 0 pods out of 1
Aug 28 13:45:33.525: INFO: Pod name my-hostname-basic-64e8d8a7-5be6-45ed-b2a8-b03dc1e3d889: Found 1 pods out of 1
Aug 28 13:45:33.526: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-64e8d8a7-5be6-45ed-b2a8-b03dc1e3d889" is running
Aug 28 13:45:33.718: INFO: Pod "my-hostname-basic-64e8d8a7-5be6-45ed-b2a8-b03dc1e3d889-9wzdx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 13:45:28 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 13:45:32 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 13:45:32 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 13:45:28 +0000 UTC Reason: Message:}])
Aug 28 13:45:33.718: INFO: Trying to dial the pod
Aug 28 13:45:38.742: INFO: Controller my-hostname-basic-64e8d8a7-5be6-45ed-b2a8-b03dc1e3d889: Got expected result from replica 1 [my-hostname-basic-64e8d8a7-5be6-45ed-b2a8-b03dc1e3d889-9wzdx]: "my-hostname-basic-64e8d8a7-5be6-45ed-b2a8-b03dc1e3d889-9wzdx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:45:38.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7046" for this suite.

• [SLOW TEST:10.701 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":102,"skipped":1612,"failed":0}
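[Editor's note: the ReplicaSet above serves each pod's hostname over HTTP so the test can dial every replica; a manifest of that shape is sketched below. Names and the port are placeholders, not values from this run.]

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-example    # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: ["serve-hostname"]     # replies with the pod's hostname
        ports:
        - containerPort: 9376       # placeholder port
```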
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:45:38.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:45:56.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2643" for this suite.

• [SLOW TEST:17.439 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":103,"skipped":1612,"failed":0}
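[Editor's note: the two quotas this test creates differ only in their scope — one counts only BestEffort pods (no resource requests or limits), the other only pods that do set them. A sketch, with placeholder names and limits:]

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-besteffort            # placeholder name
spec:
  hard:
    pods: "5"                       # placeholder limit
  scopes: ["BestEffort"]            # tracks only pods with no requests/limits
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-besteffort        # placeholder name
spec:
  hard:
    pods: "5"                       # placeholder limit
  scopes: ["NotBestEffort"]         # tracks only pods that set requests/limits
```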
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:45:56.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-9613
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9613 to expose endpoints map[]
Aug 28 13:45:56.365: INFO: successfully validated that service multi-endpoint-test in namespace services-9613 exposes endpoints map[] (13.67081ms elapsed)
STEP: Creating pod pod1 in namespace services-9613
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9613 to expose endpoints map[pod1:[100]]
Aug 28 13:46:00.486: INFO: successfully validated that service multi-endpoint-test in namespace services-9613 exposes endpoints map[pod1:[100]] (4.108488657s elapsed)
STEP: Creating pod pod2 in namespace services-9613
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9613 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 28 13:46:04.783: INFO: successfully validated that service multi-endpoint-test in namespace services-9613 exposes endpoints map[pod1:[100] pod2:[101]] (4.289576259s elapsed)
STEP: Deleting pod pod1 in namespace services-9613
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9613 to expose endpoints map[pod2:[101]]
Aug 28 13:46:04.825: INFO: successfully validated that service multi-endpoint-test in namespace services-9613 exposes endpoints map[pod2:[101]] (35.203846ms elapsed)
STEP: Deleting pod pod2 in namespace services-9613
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9613 to expose endpoints map[]
Aug 28 13:46:05.893: INFO: successfully validated that service multi-endpoint-test in namespace services-9613 exposes endpoints map[] (1.062118737s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:46:06.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9613" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:10.046 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":104,"skipped":1624,"failed":0}
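[Editor's note: the endpoint maps logged above (pod1:[100], pod2:[101]) come from a Service exposing two named ports with different targetPorts; a manifest of that shape is sketched below. The selector label and port numbers are placeholders consistent with, but not copied from, this run.]

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    name: multi-endpoint-test       # placeholder label
  ports:
  - name: portname1
    port: 80
    targetPort: 100                 # backed by pod1's container port
  - name: portname2
    port: 81
    targetPort: 101                 # backed by pod2's container port
```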
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:46:06.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:46:06.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 28 13:46:27.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4995 create -f -'
Aug 28 13:46:37.052: INFO: stderr: ""
Aug 28 13:46:37.052: INFO: stdout: "e2e-test-crd-publish-openapi-9236-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 28 13:46:37.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4995 delete e2e-test-crd-publish-openapi-9236-crds test-cr'
Aug 28 13:46:38.599: INFO: stderr: ""
Aug 28 13:46:38.599: INFO: stdout: "e2e-test-crd-publish-openapi-9236-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 28 13:46:38.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4995 apply -f -'
Aug 28 13:46:40.465: INFO: stderr: ""
Aug 28 13:46:40.465: INFO: stdout: "e2e-test-crd-publish-openapi-9236-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 28 13:46:40.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4995 delete e2e-test-crd-publish-openapi-9236-crds test-cr'
Aug 28 13:46:42.133: INFO: stderr: ""
Aug 28 13:46:42.133: INFO: stdout: "e2e-test-crd-publish-openapi-9236-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 28 13:46:42.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9236-crds'
Aug 28 13:46:43.884: INFO: stderr: ""
Aug 28 13:46:43.884: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9236-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:47:04.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4995" for this suite.

• [SLOW TEST:58.017 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":105,"skipped":1629,"failed":0}
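[Editor's note: "preserving unknown fields in an embedded object" corresponds to a CRD schema that sets `x-kubernetes-preserve-unknown-fields` on a nested object, which is why `kubectl create`/`apply` accepted arbitrary properties above. A sketch, with placeholder group and names:]

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.example.com        # placeholder; must be <plural>.<group>
spec:
  group: example.com                # placeholder group
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            # nested object accepts any unknown properties
            x-kubernetes-preserve-unknown-fields: true
```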
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:47:04.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 28 13:47:05.990: INFO: Waiting up to 5m0s for pod "pod-16368a92-872d-4778-837e-5c822c60cda3" in namespace "emptydir-134" to be "Succeeded or Failed"
Aug 28 13:47:06.823: INFO: Pod "pod-16368a92-872d-4778-837e-5c822c60cda3": Phase="Pending", Reason="", readiness=false. Elapsed: 832.867379ms
Aug 28 13:47:08.829: INFO: Pod "pod-16368a92-872d-4778-837e-5c822c60cda3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.837965513s
Aug 28 13:47:11.355: INFO: Pod "pod-16368a92-872d-4778-837e-5c822c60cda3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.364710665s
Aug 28 13:47:13.812: INFO: Pod "pod-16368a92-872d-4778-837e-5c822c60cda3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.821063161s
Aug 28 13:47:16.012: INFO: Pod "pod-16368a92-872d-4778-837e-5c822c60cda3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021397949s
Aug 28 13:47:18.017: INFO: Pod "pod-16368a92-872d-4778-837e-5c822c60cda3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.026612632s
STEP: Saw pod success
Aug 28 13:47:18.017: INFO: Pod "pod-16368a92-872d-4778-837e-5c822c60cda3" satisfied condition "Succeeded or Failed"
Aug 28 13:47:18.021: INFO: Trying to get logs from node kali-worker2 pod pod-16368a92-872d-4778-837e-5c822c60cda3 container test-container: 
STEP: delete the pod
Aug 28 13:47:18.135: INFO: Waiting for pod pod-16368a92-872d-4778-837e-5c822c60cda3 to disappear
Aug 28 13:47:18.149: INFO: Pod pod-16368a92-872d-4778-837e-5c822c60cda3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:47:18.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-134" for this suite.

• [SLOW TEST:13.896 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1639,"failed":0}
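[Editor's note: "(root,0777,tmpfs)" means the pod runs as root and verifies a 0777 mount on a memory-backed emptyDir; `medium: Memory` is what makes the volume tmpfs. A sketch, with placeholder names and paths:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example        # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    # mounttest reports the filesystem type and permissions of the mount
    args: ["mounttest", "--fs_type=/test-volume", "--file_perm=/test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume       # placeholder path
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
```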
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:47:18.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-7cb8a191-5bea-41a5-a959-9984d3f78df9 in namespace container-probe-9150
Aug 28 13:47:22.321: INFO: Started pod busybox-7cb8a191-5bea-41a5-a959-9984d3f78df9 in namespace container-probe-9150
STEP: checking the pod's current state and verifying that restartCount is present
Aug 28 13:47:22.326: INFO: Initial restart count of pod busybox-7cb8a191-5bea-41a5-a959-9984d3f78df9 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:51:23.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9150" for this suite.

• [SLOW TEST:245.433 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1663,"failed":0}
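[Editor's note: the test passes because the container creates /tmp/health and keeps it for its whole lifetime, so the exec liveness probe never fails and restartCount stays 0 across the ~4-minute observation window logged above. A pod of that shape is sketched below; the name and timings are placeholders.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example    # placeholder name
spec:
  containers:
  - name: busybox
    image: busybox
    # file exists for the container's entire life, so the probe keeps passing
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5        # placeholder timing
      periodSeconds: 5              # placeholder timing
      failureThreshold: 1
```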
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:51:23.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Aug 28 13:51:25.248: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 28 13:51:25.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1378'
Aug 28 13:51:28.896: INFO: stderr: ""
Aug 28 13:51:28.897: INFO: stdout: "service/agnhost-slave created\n"
Aug 28 13:51:28.898: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 28 13:51:28.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1378'
Aug 28 13:51:32.236: INFO: stderr: ""
Aug 28 13:51:32.236: INFO: stdout: "service/agnhost-master created\n"
Aug 28 13:51:32.238: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 28 13:51:32.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1378'
Aug 28 13:51:35.391: INFO: stderr: ""
Aug 28 13:51:35.391: INFO: stdout: "service/frontend created\n"
Aug 28 13:51:35.393: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 28 13:51:35.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1378'
Aug 28 13:51:37.809: INFO: stderr: ""
Aug 28 13:51:37.809: INFO: stdout: "deployment.apps/frontend created\n"
Aug 28 13:51:37.810: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 28 13:51:37.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1378'
Aug 28 13:51:39.497: INFO: stderr: ""
Aug 28 13:51:39.497: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 28 13:51:39.498: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 28 13:51:39.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1378'
Aug 28 13:51:42.214: INFO: stderr: ""
Aug 28 13:51:42.214: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 28 13:51:42.214: INFO: Waiting for all frontend pods to be Running.
Aug 28 13:51:52.266: INFO: Waiting for frontend to serve content.
Aug 28 13:51:53.620: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Aug 28 13:51:58.631: INFO: Trying to add a new entry to the guestbook.
Aug 28 13:51:58.644: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 28 13:51:58.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1378'
Aug 28 13:51:59.950: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 28 13:51:59.950: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 28 13:51:59.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1378'
Aug 28 13:52:01.241: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 28 13:52:01.241: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 28 13:52:01.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1378'
Aug 28 13:52:02.609: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 28 13:52:02.609: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 28 13:52:02.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1378'
Aug 28 13:52:03.862: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 28 13:52:03.862: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 28 13:52:03.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1378'
Aug 28 13:52:05.575: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 28 13:52:05.575: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 28 13:52:05.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1378'
Aug 28 13:52:06.947: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 28 13:52:06.947: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:52:06.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1378" for this suite.

• [SLOW TEST:44.120 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":108,"skipped":1678,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:52:07.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 13:52:11.877: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 13:52:14.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 13:52:16.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 13:52:18.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 13:52:20.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734219531, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 13:52:24.061: INFO: Waiting for the number of service e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:52:24.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
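The registration payload sent via the AdmissionRegistration API is not printed in the log. A hedged sketch of what a denying ValidatingWebhookConfiguration for a custom resource typically looks like; the webhook name, CRD group, resource plural, and CA bundle below are assumptions, while the service name and namespace are taken from the log above:

```yaml
# Hedged sketch: the actual registration object is created in Go by the e2e
# framework and not shown here. Names marked "hypothetical" are assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-webhook      # hypothetical name
webhooks:
- name: deny-customresource.example.com   # hypothetical
  clientConfig:
    service:
      namespace: webhook-7701             # namespace from the log above
      name: e2e-test-webhook              # service name from the log above
      path: /custom-resource              # hypothetical path
    caBundle: "<base64-encoded CA cert>"  # elided
  rules:
  - apiGroups: ["mygroup.example.com"]    # hypothetical CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["mycustomresources"]      # hypothetical resource plural
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                     # deny the request if the webhook errors
```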
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:52:25.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7701" for this suite.
STEP: Destroying namespace "webhook-7701-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.522 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":109,"skipped":1678,"failed":0}
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:52:26.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
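The manifests behind these Given/When/Then steps are not shown in the log. The adoption pattern they exercise looks roughly like the following sketch: a pod carrying a 'name' label is created first, then a ReplicationController whose selector matches that label adopts the pre-existing pod instead of creating a new replica (the controller name and image are assumptions):

```yaml
# Hedged sketch: not printed in the log. The pod's 'name' label matches the
# controller's selector, so the controller adopts it rather than creating a pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption               # label the controller's selector will match
spec:
  containers:
  - name: pod-adoption
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12  # image assumed
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption-controller      # hypothetical name
spec:
  replicas: 1
  selector:
    name: pod-adoption               # matches the orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```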
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:52:34.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-312" for this suite.

• [SLOW TEST:7.828 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":110,"skipped":1679,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:52:34.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 28 13:52:34.217: INFO: Waiting up to 5m0s for pod "downward-api-4fd2c63c-05d4-486e-95c9-72b4cf47b4ca" in namespace "downward-api-6433" to be "Succeeded or Failed"
Aug 28 13:52:34.223: INFO: Pod "downward-api-4fd2c63c-05d4-486e-95c9-72b4cf47b4ca": Phase="Pending", Reason="", readiness=false. Elapsed: 5.719655ms
Aug 28 13:52:36.228: INFO: Pod "downward-api-4fd2c63c-05d4-486e-95c9-72b4cf47b4ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011302803s
Aug 28 13:52:38.589: INFO: Pod "downward-api-4fd2c63c-05d4-486e-95c9-72b4cf47b4ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371874155s
Aug 28 13:52:40.596: INFO: Pod "downward-api-4fd2c63c-05d4-486e-95c9-72b4cf47b4ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.379064086s
STEP: Saw pod success
Aug 28 13:52:40.596: INFO: Pod "downward-api-4fd2c63c-05d4-486e-95c9-72b4cf47b4ca" satisfied condition "Succeeded or Failed"
Aug 28 13:52:40.601: INFO: Trying to get logs from node kali-worker pod downward-api-4fd2c63c-05d4-486e-95c9-72b4cf47b4ca container dapi-container: 
STEP: delete the pod
Aug 28 13:52:41.382: INFO: Waiting for pod downward-api-4fd2c63c-05d4-486e-95c9-72b4cf47b4ca to disappear
Aug 28 13:52:41.583: INFO: Pod downward-api-4fd2c63c-05d4-486e-95c9-72b4cf47b4ca no longer exists
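The pod spec under test is not shown in the log. A pod that exposes the node's host IP as an environment variable via the downward API typically looks like the sketch below; the variable name, image, and command are assumptions, while the container name dapi-container is taken from the log above:

```yaml
# Hedged sketch: the e2e framework builds this pod in Go; this is a
# representative equivalent, not the exact spec from the run.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container             # container name from the log above
    image: busybox                   # image assumed
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP                  # variable name assumed
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # downward API field for the node's IP
```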
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:52:41.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6433" for this suite.

• [SLOW TEST:7.546 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1683,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:52:41.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-6490
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 28 13:52:41.992: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 28 13:52:42.277: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 13:52:44.286: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 13:52:46.485: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 13:52:48.520: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 13:52:50.553: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 13:52:52.287: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 13:52:54.377: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 13:52:56.284: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 13:52:58.286: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 13:53:00.286: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 13:53:02.284: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 13:53:04.284: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 13:53:06.404: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 28 13:53:06.416: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 28 13:53:08.468: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 28 13:53:10.966: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 28 13:53:17.955: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.248:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6490 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 13:53:17.955: INFO: >>> kubeConfig: /root/.kube/config
I0828 13:53:18.049763      11 log.go:172] (0x4000ee4210) (0x400120e000) Create stream
I0828 13:53:18.050014      11 log.go:172] (0x4000ee4210) (0x400120e000) Stream added, broadcasting: 1
I0828 13:53:18.054469      11 log.go:172] (0x4000ee4210) Reply frame received for 1
I0828 13:53:18.054649      11 log.go:172] (0x4000ee4210) (0x4002706aa0) Create stream
I0828 13:53:18.054736      11 log.go:172] (0x4000ee4210) (0x4002706aa0) Stream added, broadcasting: 3
I0828 13:53:18.056157      11 log.go:172] (0x4000ee4210) Reply frame received for 3
I0828 13:53:18.056281      11 log.go:172] (0x4000ee4210) (0x4002706b40) Create stream
I0828 13:53:18.056347      11 log.go:172] (0x4000ee4210) (0x4002706b40) Stream added, broadcasting: 5
I0828 13:53:18.057695      11 log.go:172] (0x4000ee4210) Reply frame received for 5
I0828 13:53:18.129309      11 log.go:172] (0x4000ee4210) Data frame received for 3
I0828 13:53:18.129518      11 log.go:172] (0x4002706aa0) (3) Data frame handling
I0828 13:53:18.129683      11 log.go:172] (0x4000ee4210) Data frame received for 5
I0828 13:53:18.129867      11 log.go:172] (0x4002706b40) (5) Data frame handling
I0828 13:53:18.130041      11 log.go:172] (0x4002706aa0) (3) Data frame sent
I0828 13:53:18.130161      11 log.go:172] (0x4000ee4210) Data frame received for 3
I0828 13:53:18.130235      11 log.go:172] (0x4002706aa0) (3) Data frame handling
I0828 13:53:18.130994      11 log.go:172] (0x4000ee4210) Data frame received for 1
I0828 13:53:18.131113      11 log.go:172] (0x400120e000) (1) Data frame handling
I0828 13:53:18.131244      11 log.go:172] (0x400120e000) (1) Data frame sent
I0828 13:53:18.131363      11 log.go:172] (0x4000ee4210) (0x400120e000) Stream removed, broadcasting: 1
I0828 13:53:18.131497      11 log.go:172] (0x4000ee4210) Go away received
I0828 13:53:18.131912      11 log.go:172] (0x4000ee4210) (0x400120e000) Stream removed, broadcasting: 1
I0828 13:53:18.132028      11 log.go:172] (0x4000ee4210) (0x4002706aa0) Stream removed, broadcasting: 3
I0828 13:53:18.132120      11 log.go:172] (0x4000ee4210) (0x4002706b40) Stream removed, broadcasting: 5
Aug 28 13:53:18.132: INFO: Found all expected endpoints: [netserver-0]
Aug 28 13:53:18.138: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.228:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6490 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 13:53:18.138: INFO: >>> kubeConfig: /root/.kube/config
I0828 13:53:18.200520      11 log.go:172] (0x40031d4580) (0x40026b4640) Create stream
I0828 13:53:18.200679      11 log.go:172] (0x40031d4580) (0x40026b4640) Stream added, broadcasting: 1
I0828 13:53:18.205950      11 log.go:172] (0x40031d4580) Reply frame received for 1
I0828 13:53:18.206236      11 log.go:172] (0x40031d4580) (0x4001e92000) Create stream
I0828 13:53:18.206364      11 log.go:172] (0x40031d4580) (0x4001e92000) Stream added, broadcasting: 3
I0828 13:53:18.208411      11 log.go:172] (0x40031d4580) Reply frame received for 3
I0828 13:53:18.208591      11 log.go:172] (0x40031d4580) (0x4002706be0) Create stream
I0828 13:53:18.208670      11 log.go:172] (0x40031d4580) (0x4002706be0) Stream added, broadcasting: 5
I0828 13:53:18.210428      11 log.go:172] (0x40031d4580) Reply frame received for 5
I0828 13:53:18.288409      11 log.go:172] (0x40031d4580) Data frame received for 3
I0828 13:53:18.288599      11 log.go:172] (0x4001e92000) (3) Data frame handling
I0828 13:53:18.288878      11 log.go:172] (0x40031d4580) Data frame received for 5
I0828 13:53:18.289044      11 log.go:172] (0x4002706be0) (5) Data frame handling
I0828 13:53:18.289241      11 log.go:172] (0x4001e92000) (3) Data frame sent
I0828 13:53:18.289387      11 log.go:172] (0x40031d4580) Data frame received for 3
I0828 13:53:18.289503      11 log.go:172] (0x4001e92000) (3) Data frame handling
I0828 13:53:18.289987      11 log.go:172] (0x40031d4580) Data frame received for 1
I0828 13:53:18.290127      11 log.go:172] (0x40026b4640) (1) Data frame handling
I0828 13:53:18.290235      11 log.go:172] (0x40026b4640) (1) Data frame sent
I0828 13:53:18.290335      11 log.go:172] (0x40031d4580) (0x40026b4640) Stream removed, broadcasting: 1
I0828 13:53:18.290472      11 log.go:172] (0x40031d4580) Go away received
I0828 13:53:18.290827      11 log.go:172] (0x40031d4580) (0x40026b4640) Stream removed, broadcasting: 1
I0828 13:53:18.290913      11 log.go:172] (0x40031d4580) (0x4001e92000) Stream removed, broadcasting: 3
I0828 13:53:18.290988      11 log.go:172] (0x40031d4580) (0x4002706be0) Stream removed, broadcasting: 5
Aug 28 13:53:18.291: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:53:18.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6490" for this suite.

• [SLOW TEST:36.684 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1691,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:53:18.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 13:53:18.974: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:53:20.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7369" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":113,"skipped":1715,"failed":0}
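The CRD used by this defaulting test is created in Go and never printed in the log. Defaulting "for requests and from storage" means a `default:` declared in the structural schema is applied both when objects are written via the API and when existing objects are read back from etcd. A hedged sketch with hypothetical group and field names:

```yaml
# Hedged sketch: names are hypothetical; only the defaulting mechanism
# (apiextensions.k8s.io/v1 structural schema 'default') is the point here.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.mygroup.example.com   # hypothetical
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer
                default: 3   # applied on API requests and when reading from storage
```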

------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:53:21.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Aug 28 13:53:22.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-7883 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 28 13:53:23.789: INFO: stderr: ""
Aug 28 13:53:23.790: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Aug 28 13:53:23.790: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 28 13:53:23.790: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7883" to be "running and ready, or succeeded"
Aug 28 13:53:23.972: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 181.42682ms
Aug 28 13:53:25.994: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203831637s
Aug 28 13:53:28.289: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.499134001s
Aug 28 13:53:30.859: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.06844482s
Aug 28 13:53:32.866: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 9.075288961s
Aug 28 13:53:32.866: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 28 13:53:32.866: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 28 13:53:32.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7883'
Aug 28 13:53:34.650: INFO: stderr: ""
Aug 28 13:53:34.650: INFO: stdout: "I0828 13:53:31.651683       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/5n8x 559\nI0828 13:53:31.851898       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/bfk 262\nI0828 13:53:32.051815       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/qdl7 534\nI0828 13:53:32.251870       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/b2rk 492\nI0828 13:53:32.451808       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/6jg5 392\nI0828 13:53:32.651778       1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/5ss 308\nI0828 13:53:32.851889       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/r7g 360\nI0828 13:53:33.051766       1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/776m 498\nI0828 13:53:33.251829       1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/fww 228\nI0828 13:53:33.451852       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/rbj 561\nI0828 13:53:33.651809       1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/ktp 325\nI0828 13:53:33.851825       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/mtp 270\nI0828 13:53:34.051893       1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/7jv2 557\nI0828 13:53:34.251812       1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/w7z 329\nI0828 13:53:34.451828       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/z8t 502\n"
STEP: limiting log lines
Aug 28 13:53:34.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7883 --tail=1'
Aug 28 13:53:36.195: INFO: stderr: ""
Aug 28 13:53:36.195: INFO: stdout: "I0828 13:53:36.051802       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/p8h9 415\n"
Aug 28 13:53:36.196: INFO: got output "I0828 13:53:36.051802       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/p8h9 415\n"
STEP: limiting log bytes
Aug 28 13:53:36.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7883 --limit-bytes=1'
Aug 28 13:53:37.499: INFO: stderr: ""
Aug 28 13:53:37.499: INFO: stdout: "I"
Aug 28 13:53:37.500: INFO: got output "I"
STEP: exposing timestamps
Aug 28 13:53:37.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7883 --tail=1 --timestamps'
Aug 28 13:53:39.479: INFO: stderr: ""
Aug 28 13:53:39.479: INFO: stdout: "2020-08-28T13:53:39.451983241Z I0828 13:53:39.451821       1 logs_generator.go:76] 39 GET /api/v1/namespaces/kube-system/pods/bc7t 376\n"
Aug 28 13:53:39.480: INFO: got output "2020-08-28T13:53:39.451983241Z I0828 13:53:39.451821       1 logs_generator.go:76] 39 GET /api/v1/namespaces/kube-system/pods/bc7t 376\n"
STEP: restricting to a time range
Aug 28 13:53:41.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7883 --since=1s'
Aug 28 13:53:44.076: INFO: stderr: ""
Aug 28 13:53:44.077: INFO: stdout: "I0828 13:53:43.251816       1 logs_generator.go:76] 58 PUT /api/v1/namespaces/kube-system/pods/sbc 529\nI0828 13:53:43.451846       1 logs_generator.go:76] 59 POST /api/v1/namespaces/ns/pods/v7lc 520\nI0828 13:53:43.651878       1 logs_generator.go:76] 60 POST /api/v1/namespaces/ns/pods/8zg 398\nI0828 13:53:43.851848       1 logs_generator.go:76] 61 PUT /api/v1/namespaces/default/pods/zdm 343\nI0828 13:53:44.052846       1 logs_generator.go:76] 62 GET /api/v1/namespaces/default/pods/fqvk 557\n"
Aug 28 13:53:44.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7883 --since=24h'
Aug 28 13:53:45.807: INFO: stderr: ""
Aug 28 13:53:45.808: INFO: stdout: "I0828 13:53:31.651683       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/5n8x 559\nI0828 13:53:31.851898       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/bfk 262\nI0828 13:53:32.051815       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/qdl7 534\nI0828 13:53:32.251870       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/b2rk 492\nI0828 13:53:32.451808       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/6jg5 392\nI0828 13:53:32.651778       1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/5ss 308\nI0828 13:53:32.851889       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/r7g 360\nI0828 13:53:33.051766       1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/776m 498\nI0828 13:53:33.251829       1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/fww 228\nI0828 13:53:33.451852       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/rbj 561\nI0828 13:53:33.651809       1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/ktp 325\nI0828 13:53:33.851825       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/mtp 270\nI0828 13:53:34.051893       1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/7jv2 557\nI0828 13:53:34.251812       1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/w7z 329\nI0828 13:53:34.451828       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/z8t 502\nI0828 13:53:34.651867       1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/z56 312\nI0828 13:53:34.851849       1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/j7c 279\nI0828 13:53:35.051820       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/4f62 242\nI0828 13:53:35.251847       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/wr9 507\nI0828 13:53:35.451805       1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/5qmq 507\nI0828 13:53:35.651849       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/ncg 322\nI0828 13:53:35.851837       1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/ffhr 267\nI0828 13:53:36.051802       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/p8h9 415\nI0828 13:53:36.251818       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/pk67 450\nI0828 13:53:36.451814       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/2c8 465\nI0828 13:53:36.651810       1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/wlh 347\nI0828 13:53:36.851799       1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/8gsw 442\nI0828 13:53:37.051804       1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/jhzq 250\nI0828 13:53:37.251809       1 logs_generator.go:76] 28 POST /api/v1/namespaces/default/pods/zh64 551\nI0828 13:53:37.451901       1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/2g9 328\nI0828 13:53:37.651825       1 logs_generator.go:76] 30 PUT /api/v1/namespaces/default/pods/77k 262\nI0828 13:53:37.851820       1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/xh5j 461\nI0828 13:53:38.051865       1 logs_generator.go:76] 32 GET /api/v1/namespaces/default/pods/dr9t 509\nI0828 13:53:38.251832       1 logs_generator.go:76] 33 POST /api/v1/namespaces/ns/pods/5rgm 382\nI0828 13:53:38.451869       1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/q2dr 433\nI0828 13:53:38.651835       1 logs_generator.go:76] 35 POST /api/v1/namespaces/ns/pods/wjjc 259\nI0828 13:53:38.851837       1 logs_generator.go:76] 36 POST /api/v1/namespaces/ns/pods/l4j 246\nI0828 13:53:39.051858       1 logs_generator.go:76] 37 PUT /api/v1/namespaces/ns/pods/9dr9 576\nI0828 13:53:39.251871       1 logs_generator.go:76] 38 GET /api/v1/namespaces/ns/pods/cfrt 381\nI0828 13:53:39.451821       1 logs_generator.go:76] 39 GET /api/v1/namespaces/kube-system/pods/bc7t 376\nI0828 13:53:39.651845       1 logs_generator.go:76] 40 GET /api/v1/namespaces/ns/pods/vvj 568\nI0828 13:53:39.851976       1 logs_generator.go:76] 41 POST /api/v1/namespaces/default/pods/5gc 408\nI0828 13:53:40.051877       1 logs_generator.go:76] 42 POST /api/v1/namespaces/default/pods/rf6p 460\nI0828 13:53:40.251854       1 logs_generator.go:76] 43 GET /api/v1/namespaces/ns/pods/h2tj 367\nI0828 13:53:40.451862       1 logs_generator.go:76] 44 PUT /api/v1/namespaces/default/pods/q4w8 373\nI0828 13:53:40.651835       1 logs_generator.go:76] 45 PUT /api/v1/namespaces/ns/pods/gt64 303\nI0828 13:53:40.851874       1 logs_generator.go:76] 46 POST /api/v1/namespaces/default/pods/hvmx 255\nI0828 13:53:41.051813       1 logs_generator.go:76] 47 PUT /api/v1/namespaces/default/pods/gnm 266\nI0828 13:53:41.251867       1 logs_generator.go:76] 48 GET /api/v1/namespaces/default/pods/wq9 290\nI0828 13:53:41.451858       1 logs_generator.go:76] 49 GET /api/v1/namespaces/ns/pods/qf8v 365\nI0828 13:53:41.651850       1 logs_generator.go:76] 50 GET /api/v1/namespaces/ns/pods/rzg 413\nI0828 13:53:41.851853       1 logs_generator.go:76] 51 PUT /api/v1/namespaces/kube-system/pods/nkz 570\nI0828 13:53:42.051831       1 logs_generator.go:76] 52 PUT /api/v1/namespaces/ns/pods/4r2c 595\nI0828 13:53:42.251831       1 logs_generator.go:76] 53 GET /api/v1/namespaces/ns/pods/mc7 295\nI0828 13:53:42.451836       1 logs_generator.go:76] 54 GET /api/v1/namespaces/ns/pods/2l6n 377\nI0828 13:53:42.651803       1 logs_generator.go:76] 55 GET /api/v1/namespaces/kube-system/pods/pvn 297\nI0828 13:53:42.851785       1 logs_generator.go:76] 56 POST /api/v1/namespaces/kube-system/pods/x2rr 464\nI0828 13:53:43.051831       1 logs_generator.go:76] 57 POST /api/v1/namespaces/ns/pods/8mp 379\nI0828 13:53:43.251816       1 logs_generator.go:76] 58 PUT /api/v1/namespaces/kube-system/pods/sbc 529\nI0828 13:53:43.451846       1 logs_generator.go:76] 59 POST /api/v1/namespaces/ns/pods/v7lc 520\nI0828 13:53:43.651878       1 logs_generator.go:76] 60 POST /api/v1/namespaces/ns/pods/8zg 398\nI0828 13:53:43.851848       1 logs_generator.go:76] 61 PUT /api/v1/namespaces/default/pods/zdm 343\nI0828 13:53:44.052846       1 logs_generator.go:76] 62 GET /api/v1/namespaces/default/pods/fqvk 557\nI0828 13:53:44.251976       1 logs_generator.go:76] 63 PUT /api/v1/namespaces/default/pods/8ww 571\nI0828 13:53:44.451845       1 logs_generator.go:76] 64 PUT /api/v1/namespaces/kube-system/pods/zjbm 523\nI0828 13:53:44.651827       1 logs_generator.go:76] 65 POST /api/v1/namespaces/kube-system/pods/f99p 406\nI0828 13:53:44.851814       1 logs_generator.go:76] 66 PUT /api/v1/namespaces/default/pods/7fr 559\nI0828 13:53:45.051841       1 logs_generator.go:76] 67 POST /api/v1/namespaces/kube-system/pods/c47l 318\nI0828 13:53:45.251818       1 logs_generator.go:76] 68 PUT /api/v1/namespaces/ns/pods/t2hl 576\nI0828 13:53:45.451843       1 logs_generator.go:76] 69 POST /api/v1/namespaces/ns/pods/m8z 413\nI0828 13:53:45.651824       1 logs_generator.go:76] 70 GET /api/v1/namespaces/ns/pods/xjhz 263\n"
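The five filtering flags exercised in the steps above can be collected into one reference script. The pod and namespace names below are the ones this run generated; substitute your own. The sketch only writes the script to a file, since actually running it needs a live cluster:

```shell
# Reference script for the kubectl log-filtering flags exercised by this test.
# Pod/container/namespace names come from this particular run; adjust as needed.
cat <<'EOF' > logs-filters.sh
kubectl logs logs-generator logs-generator --namespace=kubectl-7883                        # full log
kubectl logs logs-generator logs-generator --namespace=kubectl-7883 --tail=1               # last line only
kubectl logs logs-generator logs-generator --namespace=kubectl-7883 --limit-bytes=1        # first byte only
kubectl logs logs-generator logs-generator --namespace=kubectl-7883 --tail=1 --timestamps  # RFC3339 timestamp prefix
kubectl logs logs-generator logs-generator --namespace=kubectl-7883 --since=1s             # only the last second
EOF
```

Each command mirrors one STEP in the log: full retrieval, line limiting, byte limiting, timestamp exposure, and time-range restriction.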
[AfterEach] Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Aug 28 13:53:45.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7883'
Aug 28 13:53:54.847: INFO: stderr: ""
Aug 28 13:53:54.847: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:53:54.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7883" for this suite.

• [SLOW TEST:33.495 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":114,"skipped":1715,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:53:55.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Aug 28 13:53:56.694: INFO: Waiting up to 5m0s for pod "client-containers-22f5e1b5-061f-4840-a4a2-c1f2ac83ec74" in namespace "containers-6595" to be "Succeeded or Failed"
Aug 28 13:53:56.707: INFO: Pod "client-containers-22f5e1b5-061f-4840-a4a2-c1f2ac83ec74": Phase="Pending", Reason="", readiness=false. Elapsed: 13.051153ms
Aug 28 13:53:58.814: INFO: Pod "client-containers-22f5e1b5-061f-4840-a4a2-c1f2ac83ec74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120148829s
Aug 28 13:54:01.134: INFO: Pod "client-containers-22f5e1b5-061f-4840-a4a2-c1f2ac83ec74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44025939s
Aug 28 13:54:03.139: INFO: Pod "client-containers-22f5e1b5-061f-4840-a4a2-c1f2ac83ec74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.445597448s
STEP: Saw pod success
Aug 28 13:54:03.140: INFO: Pod "client-containers-22f5e1b5-061f-4840-a4a2-c1f2ac83ec74" satisfied condition "Succeeded or Failed"
Aug 28 13:54:03.144: INFO: Trying to get logs from node kali-worker pod client-containers-22f5e1b5-061f-4840-a4a2-c1f2ac83ec74 container test-container: 
STEP: delete the pod
Aug 28 13:54:03.644: INFO: Waiting for pod client-containers-22f5e1b5-061f-4840-a4a2-c1f2ac83ec74 to disappear
Aug 28 13:54:03.806: INFO: Pod client-containers-22f5e1b5-061f-4840-a4a2-c1f2ac83ec74 no longer exists
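What this test drives can be reproduced with a plain pod manifest: setting `command:` in the container spec overrides the image's default ENTRYPOINT. A minimal sketch; the image, names, and arguments here are illustrative stand-ins, not the exact ones the e2e framework uses, and applying the manifest requires a live cluster:

```shell
# Write an illustrative pod spec whose `command:` overrides the image ENTRYPOINT.
cat <<'EOF' > override-command-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo   # hypothetical name; the test generates a UUID-suffixed one
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29          # illustrative image
    command: ["/bin/echo"]       # replaces the image's default ENTRYPOINT
    args: ["entrypoint", "overridden"]
EOF
# kubectl apply -f override-command-pod.yaml   # requires a live cluster
```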
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:54:03.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6595" for this suite.

• [SLOW TEST:8.968 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":1727,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:54:04.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-mr82
STEP: Creating a pod to test atomic-volume-subpath
Aug 28 13:54:04.615: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mr82" in namespace "subpath-3250" to be "Succeeded or Failed"
Aug 28 13:54:04.753: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Pending", Reason="", readiness=false. Elapsed: 137.076891ms
Aug 28 13:54:07.112: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.496484678s
Aug 28 13:54:09.261: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.645138027s
Aug 28 13:54:11.506: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.890355643s
Aug 28 13:54:13.816: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Pending", Reason="", readiness=false. Elapsed: 9.2005577s
Aug 28 13:54:15.942: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Pending", Reason="", readiness=false. Elapsed: 11.326335203s
Aug 28 13:54:17.950: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Pending", Reason="", readiness=false. Elapsed: 13.334186663s
Aug 28 13:54:19.957: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Running", Reason="", readiness=true. Elapsed: 15.341860851s
Aug 28 13:54:21.964: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Running", Reason="", readiness=true. Elapsed: 17.347960221s
Aug 28 13:54:23.971: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Running", Reason="", readiness=true. Elapsed: 19.355392221s
Aug 28 13:54:26.274: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Running", Reason="", readiness=true. Elapsed: 21.657958646s
Aug 28 13:54:28.282: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Running", Reason="", readiness=true. Elapsed: 23.666036061s
Aug 28 13:54:30.361: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Running", Reason="", readiness=true. Elapsed: 25.745720671s
Aug 28 13:54:32.464: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Running", Reason="", readiness=true. Elapsed: 27.848133966s
Aug 28 13:54:34.469: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Running", Reason="", readiness=true. Elapsed: 29.853644456s
Aug 28 13:54:36.577: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Running", Reason="", readiness=true. Elapsed: 31.961787937s
Aug 28 13:54:38.709: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Running", Reason="", readiness=true. Elapsed: 34.093587725s
Aug 28 13:54:40.810: INFO: Pod "pod-subpath-test-configmap-mr82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.194825496s
STEP: Saw pod success
Aug 28 13:54:40.811: INFO: Pod "pod-subpath-test-configmap-mr82" satisfied condition "Succeeded or Failed"
Aug 28 13:54:41.080: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-mr82 container test-container-subpath-configmap-mr82: 
STEP: delete the pod
Aug 28 13:54:41.376: INFO: Waiting for pod pod-subpath-test-configmap-mr82 to disappear
Aug 28 13:54:41.397: INFO: Pod pod-subpath-test-configmap-mr82 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mr82
Aug 28 13:54:41.397: INFO: Deleting pod "pod-subpath-test-configmap-mr82" in namespace "subpath-3250"
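The atomic-writer subpath scenario above boils down to mounting a single ConfigMap key at a file path via `subPath`. A hedged sketch under assumed names (the e2e test generates its own names, data, and a mounttest image); applying it requires a live cluster:

```shell
# Write an illustrative ConfigMap plus a pod that mounts one key via subPath.
cat <<'EOF' > subpath-configmap-pod.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap             # hypothetical name
data:
  configmap-key: configmap-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29          # illustrative image
    command: ["/bin/sh", "-c", "cat /test-volume/configmap-key"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/configmap-key
      subPath: configmap-key     # mount a single key's file, not the whole volume
  volumes:
  - name: test-volume
    configMap:
      name: my-configmap
EOF
# kubectl apply -f subpath-configmap-pod.yaml   # requires a live cluster
```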
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:54:41.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3250" for this suite.

• [SLOW TEST:37.596 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":116,"skipped":1742,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:54:41.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 28 13:54:43.137: INFO: Waiting up to 5m0s for pod "pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef" in namespace "emptydir-1286" to be "Succeeded or Failed"
Aug 28 13:54:43.913: INFO: Pod "pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef": Phase="Pending", Reason="", readiness=false. Elapsed: 776.360243ms
Aug 28 13:54:45.918: INFO: Pod "pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.781045318s
Aug 28 13:54:48.294: INFO: Pod "pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.157608328s
Aug 28 13:54:50.697: INFO: Pod "pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef": Phase="Pending", Reason="", readiness=false. Elapsed: 7.56016293s
Aug 28 13:54:52.703: INFO: Pod "pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef": Phase="Running", Reason="", readiness=true. Elapsed: 9.566720211s
Aug 28 13:54:54.850: INFO: Pod "pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.713583109s
STEP: Saw pod success
Aug 28 13:54:54.851: INFO: Pod "pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef" satisfied condition "Succeeded or Failed"
Aug 28 13:54:55.442: INFO: Trying to get logs from node kali-worker pod pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef container test-container: 
STEP: delete the pod
Aug 28 13:54:55.892: INFO: Waiting for pod pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef to disappear
Aug 28 13:54:56.400: INFO: Pod pod-ac850b8a-0b66-4cce-b3d3-c14186dcafef no longer exists
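The tmpfs variant exercised above corresponds to an `emptyDir` volume with `medium: Memory`; the test asserts on the mount's permission bits. A minimal illustrative manifest (the image and command are stand-ins for the framework's mounttest container):

```shell
# Write an illustrative pod with a tmpfs-backed emptyDir and inspect its mode.
cat <<'EOF' > emptydir-tmpfs-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29          # illustrative image
    command: ["/bin/sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
EOF
# kubectl apply -f emptydir-tmpfs-pod.yaml   # requires a live cluster
```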
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:54:56.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1286" for this suite.

• [SLOW TEST:14.805 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1771,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:54:56.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:54:58.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2263" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":118,"skipped":1826,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:54:58.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-dvrm
STEP: Creating a pod to test atomic-volume-subpath
Aug 28 13:54:59.956: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dvrm" in namespace "subpath-246" to be "Succeeded or Failed"
Aug 28 13:55:00.605: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Pending", Reason="", readiness=false. Elapsed: 649.270526ms
Aug 28 13:55:02.707: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.750772521s
Aug 28 13:55:04.713: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.756635924s
Aug 28 13:55:06.719: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Running", Reason="", readiness=true. Elapsed: 6.763010515s
Aug 28 13:55:08.775: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Running", Reason="", readiness=true. Elapsed: 8.819061739s
Aug 28 13:55:10.786: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Running", Reason="", readiness=true. Elapsed: 10.829985119s
Aug 28 13:55:12.794: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Running", Reason="", readiness=true. Elapsed: 12.838301835s
Aug 28 13:55:14.813: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Running", Reason="", readiness=true. Elapsed: 14.856694158s
Aug 28 13:55:16.821: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Running", Reason="", readiness=true. Elapsed: 16.86487255s
Aug 28 13:55:18.827: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Running", Reason="", readiness=true. Elapsed: 18.871369242s
Aug 28 13:55:20.834: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Running", Reason="", readiness=true. Elapsed: 20.877746303s
Aug 28 13:55:23.143: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Running", Reason="", readiness=true. Elapsed: 23.1869277s
Aug 28 13:55:25.186: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Running", Reason="", readiness=true. Elapsed: 25.229805071s
Aug 28 13:55:27.193: INFO: Pod "pod-subpath-test-projected-dvrm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.237097776s
STEP: Saw pod success
Aug 28 13:55:27.193: INFO: Pod "pod-subpath-test-projected-dvrm" satisfied condition "Succeeded or Failed"
Aug 28 13:55:27.199: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-dvrm container test-container-subpath-projected-dvrm: 
STEP: delete the pod
Aug 28 13:55:27.813: INFO: Waiting for pod pod-subpath-test-projected-dvrm to disappear
Aug 28 13:55:27.993: INFO: Pod pod-subpath-test-projected-dvrm no longer exists
STEP: Deleting pod pod-subpath-test-projected-dvrm
Aug 28 13:55:27.993: INFO: Deleting pod "pod-subpath-test-projected-dvrm" in namespace "subpath-246"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:55:28.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-246" for this suite.

• [SLOW TEST:29.559 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":119,"skipped":1880,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:55:28.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 28 13:55:28.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 28 13:56:50.354: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 13:57:10.270: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:58:31.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8508" for this suite.

• [SLOW TEST:184.301 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":120,"skipped":1898,"failed":0}
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:58:32.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Aug 28 13:58:34.503: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 13:58:35.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-387" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":121,"skipped":1898,"failed":0}
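`kubectl proxy -p 0` (as run in the spec above) asks the operating system to choose a free ephemeral port rather than binding a fixed one. The underlying bind-to-port-zero mechanism can be sketched with Python's socket module — this illustrates the OS behavior the flag relies on, not kubectl itself, and needs no cluster:

```python
import socket

# Binding to port 0 delegates port selection to the OS,
# which is what `kubectl proxy --port 0` relies on.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
chosen_port = sock.getsockname()[1]  # the ephemeral port the OS picked
print("listening port:", chosen_port)
sock.close()
```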
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 13:58:36.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 28 13:58:45.245: INFO: Pod name wrapped-volume-race-cf611478-92c1-4038-8c1e-6018faa459e0: Found 0 pods out of 5
Aug 28 13:58:50.380: INFO: Pod name wrapped-volume-race-cf611478-92c1-4038-8c1e-6018faa459e0: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-cf611478-92c1-4038-8c1e-6018faa459e0 in namespace emptydir-wrapper-3041, will wait for the garbage collector to delete the pods
Aug 28 13:59:15.319: INFO: Deleting ReplicationController wrapped-volume-race-cf611478-92c1-4038-8c1e-6018faa459e0 took: 633.611738ms
Aug 28 13:59:16.220: INFO: Terminating ReplicationController wrapped-volume-race-cf611478-92c1-4038-8c1e-6018faa459e0 pods took: 900.785706ms
STEP: Creating RC which spawns configmap-volume pods
Aug 28 13:59:38.166: INFO: Pod name wrapped-volume-race-c5546398-7715-4afb-bed3-75fb0439bc36: Found 0 pods out of 5
Aug 28 13:59:43.193: INFO: Pod name wrapped-volume-race-c5546398-7715-4afb-bed3-75fb0439bc36: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c5546398-7715-4afb-bed3-75fb0439bc36 in namespace emptydir-wrapper-3041, will wait for the garbage collector to delete the pods
Aug 28 14:00:06.611: INFO: Deleting ReplicationController wrapped-volume-race-c5546398-7715-4afb-bed3-75fb0439bc36 took: 248.940534ms
Aug 28 14:00:07.313: INFO: Terminating ReplicationController wrapped-volume-race-c5546398-7715-4afb-bed3-75fb0439bc36 pods took: 702.001431ms
STEP: Creating RC which spawns configmap-volume pods
Aug 28 14:00:30.441: INFO: Pod name wrapped-volume-race-d6d2123e-9aa7-4e39-9d7c-4c23aada2be5: Found 0 pods out of 5
Aug 28 14:00:35.504: INFO: Pod name wrapped-volume-race-d6d2123e-9aa7-4e39-9d7c-4c23aada2be5: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d6d2123e-9aa7-4e39-9d7c-4c23aada2be5 in namespace emptydir-wrapper-3041, will wait for the garbage collector to delete the pods
Aug 28 14:00:53.909: INFO: Deleting ReplicationController wrapped-volume-race-d6d2123e-9aa7-4e39-9d7c-4c23aada2be5 took: 121.66154ms
Aug 28 14:00:54.310: INFO: Terminating ReplicationController wrapped-volume-race-d6d2123e-9aa7-4e39-9d7c-4c23aada2be5 pods took: 400.810141ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:01:11.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3041" for this suite.

• [SLOW TEST:155.648 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":122,"skipped":1901,"failed":0}
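The deletion timings above ("took: 633.611738ms") are Go `time.Duration` strings. A simplified Python parser for the common units seen in such logs (a sketch; Go's own parser accepts more forms, e.g. negative and `µs` spellings):

```python
import re

# Minimal parser for Go-style duration strings such as "633.611738ms"
# or "1m30s"; handles only a few common units.
_UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_go_duration(text: str) -> float:
    """Return the duration in seconds."""
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|ms|s|m|h)", text):
        total += float(value) * _UNITS[unit]
    return total

print(parse_go_duration("633.611738ms"))  # ~0.6336 seconds
print(parse_go_duration("1m30s"))         # 90.0 seconds
```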
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:01:11.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:01:15.033: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:01:17.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220075, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220075, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220075, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220074, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:01:20.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220075, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220075, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220075, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220074, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:01:23.703: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220075, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220075, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220075, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220074, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:01:27.067: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:01:37.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3039" for this suite.
STEP: Destroying namespace "webhook-3039-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:31.938 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":123,"skipped":1906,"failed":0}
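The repeated `DeploymentStatus` dumps above show the framework polling until the Deployment's `Available` condition turns `True`. A hedged sketch of that readiness check over a plain status dict (the field names follow the apps/v1 API shape, but the helper itself is hypothetical):

```python
def deployment_available(status: dict) -> bool:
    """A Deployment is considered ready when its 'Available'
    condition reports status 'True'."""
    for cond in status.get("conditions", []):
        if cond.get("type") == "Available":
            return cond.get("status") == "True"
    return False

# Shape mirrors the DeploymentStatus dumps in the log above
pending = {"conditions": [
    {"type": "Available", "status": "False",
     "reason": "MinimumReplicasUnavailable"},
    {"type": "Progressing", "status": "True",
     "reason": "ReplicaSetUpdated"},
]}
print(deployment_available(pending))  # → False
```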
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:01:43.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 28 14:02:02.359: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 28 14:02:02.379: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 28 14:02:04.379: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 28 14:02:04.388: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 28 14:02:06.379: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 28 14:02:06.518: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:02:06.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7426" for this suite.

• [SLOW TEST:23.108 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":1916,"failed":0}
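The spec above creates a pod carrying a PreStop exec hook and verifies the hook fires on deletion. A minimal manifest of that general shape — the name, image, and commands here are illustrative placeholders, not what the e2e framework actually generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook   # illustrative name
spec:
  containers:
  - name: main
    image: busybox                   # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before termination; the e2e test
          # instead contacts a handler pod to record that the hook fired.
          command: ["sh", "-c", "echo prestop"]
```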
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:02:06.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Aug 28 14:02:07.279: INFO: Waiting up to 5m0s for pod "var-expansion-7090c6a6-a734-4129-954b-4ec88ad8875a" in namespace "var-expansion-9791" to be "Succeeded or Failed"
Aug 28 14:02:07.314: INFO: Pod "var-expansion-7090c6a6-a734-4129-954b-4ec88ad8875a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.714855ms
Aug 28 14:02:09.321: INFO: Pod "var-expansion-7090c6a6-a734-4129-954b-4ec88ad8875a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042204982s
Aug 28 14:02:11.591: INFO: Pod "var-expansion-7090c6a6-a734-4129-954b-4ec88ad8875a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312678879s
Aug 28 14:02:13.597: INFO: Pod "var-expansion-7090c6a6-a734-4129-954b-4ec88ad8875a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.318073327s
STEP: Saw pod success
Aug 28 14:02:13.597: INFO: Pod "var-expansion-7090c6a6-a734-4129-954b-4ec88ad8875a" satisfied condition "Succeeded or Failed"
Aug 28 14:02:13.614: INFO: Trying to get logs from node kali-worker pod var-expansion-7090c6a6-a734-4129-954b-4ec88ad8875a container dapi-container: 
STEP: delete the pod
Aug 28 14:02:13.882: INFO: Waiting for pod var-expansion-7090c6a6-a734-4129-954b-4ec88ad8875a to disappear
Aug 28 14:02:13.885: INFO: Pod var-expansion-7090c6a6-a734-4129-954b-4ec88ad8875a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:02:13.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9791" for this suite.

• [SLOW TEST:7.157 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":1925,"failed":0}
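The env-composition spec above exercises Kubernetes' `$(VAR)` expansion, where one env var's value may reference earlier ones. A simplified Python sketch of that substitution rule (the real kubelet expansion also handles `$$` escaping, which this omits):

```python
import re

def expand_env(value: str, env: dict) -> str:
    """Replace $(NAME) references with values from env, leaving
    unresolved references intact, as Kubernetes does."""
    def sub(match):
        name = match.group(1)
        return env.get(name, match.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", sub, value)

env = {"FOO": "foo-value", "BAR": "bar-value"}
# Composing new env values out of earlier ones, as the test's pod spec does
print(expand_env("$(FOO);;$(BAR)", env))      # → foo-value;;bar-value
print(expand_env("$(FOO);;$(MISSING)", env))  # unresolved ref left as-is
```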
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:02:13.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 28 14:02:14.678: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-a c02d0563-9524-424a-b3b3-0b30def2de67 1765507 0 2020-08-28 14:02:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:02:14.679: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-a c02d0563-9524-424a-b3b3-0b30def2de67 1765507 0 2020-08-28 14:02:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 28 14:02:24.688: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-a c02d0563-9524-424a-b3b3-0b30def2de67 1765547 0 2020-08-28 14:02:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:02:24.689: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-a c02d0563-9524-424a-b3b3-0b30def2de67 1765547 0 2020-08-28 14:02:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 28 14:02:34.762: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-a c02d0563-9524-424a-b3b3-0b30def2de67 1765579 0 2020-08-28 14:02:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:02:34.764: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-a c02d0563-9524-424a-b3b3-0b30def2de67 1765579 0 2020-08-28 14:02:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 28 14:02:44.776: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-a c02d0563-9524-424a-b3b3-0b30def2de67 1765606 0 2020-08-28 14:02:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:02:44.778: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-a c02d0563-9524-424a-b3b3-0b30def2de67 1765606 0 2020-08-28 14:02:14 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 28 14:02:54.788: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-b 9b9a28f1-eaca-4d71-890f-ea27f1140fc3 1765634 0 2020-08-28 14:02:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:02:54.790: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-b 9b9a28f1-eaca-4d71-890f-ea27f1140fc3 1765634 0 2020-08-28 14:02:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 28 14:03:04.800: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-b 9b9a28f1-eaca-4d71-890f-ea27f1140fc3 1765661 0 2020-08-28 14:02:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:03:04.801: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-9095 /api/v1/namespaces/watch-9095/configmaps/e2e-watch-test-configmap-b 9b9a28f1-eaca-4d71-890f-ea27f1140fc3 1765661 0 2020-08-28 14:02:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-28 14:02:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:03:14.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9095" for this suite.

• [SLOW TEST:61.595 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":126,"skipped":1997,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:03:15.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:03:16.552: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188" in namespace "projected-6222" to be "Succeeded or Failed"
Aug 28 14:03:16.674: INFO: Pod "downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188": Phase="Pending", Reason="", readiness=false. Elapsed: 121.706696ms
Aug 28 14:03:18.680: INFO: Pod "downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127373996s
Aug 28 14:03:20.819: INFO: Pod "downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265967209s
Aug 28 14:03:22.985: INFO: Pod "downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432388599s
Aug 28 14:03:25.101: INFO: Pod "downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188": Phase="Running", Reason="", readiness=true. Elapsed: 8.548435047s
Aug 28 14:03:27.111: INFO: Pod "downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.558876273s
STEP: Saw pod success
Aug 28 14:03:27.112: INFO: Pod "downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188" satisfied condition "Succeeded or Failed"
Aug 28 14:03:27.118: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188 container client-container: 
STEP: delete the pod
Aug 28 14:03:27.196: INFO: Waiting for pod downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188 to disappear
Aug 28 14:03:27.206: INFO: Pod downwardapi-volume-1d329c73-f0a7-40de-9138-28b2c0c9f188 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:03:27.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6222" for this suite.

• [SLOW TEST:11.786 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":1998,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:03:27.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Aug 28 14:03:27.482: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5775" to be "Succeeded or Failed"
Aug 28 14:03:27.487: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.812608ms
Aug 28 14:03:29.591: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108790746s
Aug 28 14:03:31.669: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186030004s
Aug 28 14:03:33.674: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191021339s
Aug 28 14:03:35.691: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 8.208321809s
Aug 28 14:03:37.697: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.214237701s
STEP: Saw pod success
Aug 28 14:03:37.697: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug 28 14:03:37.703: INFO: Trying to get logs from node kali-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 28 14:03:37.872: INFO: Waiting for pod pod-host-path-test to disappear
Aug 28 14:03:37.887: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:03:37.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5775" for this suite.

• [SLOW TEST:10.613 seconds]
[sig-storage] HostPath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2014,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:03:37.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-6e340cf6-5924-445a-98c8-4d166feb0db7
STEP: Creating a pod to test consume configMaps
Aug 28 14:03:38.081: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ca8d7824-05a8-4c05-b74d-7c33150b5607" in namespace "projected-7752" to be "Succeeded or Failed"
Aug 28 14:03:38.091: INFO: Pod "pod-projected-configmaps-ca8d7824-05a8-4c05-b74d-7c33150b5607": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078854ms
Aug 28 14:03:40.096: INFO: Pod "pod-projected-configmaps-ca8d7824-05a8-4c05-b74d-7c33150b5607": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015617372s
Aug 28 14:03:42.102: INFO: Pod "pod-projected-configmaps-ca8d7824-05a8-4c05-b74d-7c33150b5607": Phase="Running", Reason="", readiness=true. Elapsed: 4.021380999s
Aug 28 14:03:44.108: INFO: Pod "pod-projected-configmaps-ca8d7824-05a8-4c05-b74d-7c33150b5607": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027069489s
STEP: Saw pod success
Aug 28 14:03:44.108: INFO: Pod "pod-projected-configmaps-ca8d7824-05a8-4c05-b74d-7c33150b5607" satisfied condition "Succeeded or Failed"
Aug 28 14:03:44.113: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-ca8d7824-05a8-4c05-b74d-7c33150b5607 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 28 14:03:44.148: INFO: Waiting for pod pod-projected-configmaps-ca8d7824-05a8-4c05-b74d-7c33150b5607 to disappear
Aug 28 14:03:44.157: INFO: Pod pod-projected-configmaps-ca8d7824-05a8-4c05-b74d-7c33150b5607 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:03:44.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7752" for this suite.

• [SLOW TEST:6.271 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2016,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:03:44.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:03:44.240: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb3366e4-1714-48ad-84c1-4a3736b30a4d" in namespace "projected-6009" to be "Succeeded or Failed"
Aug 28 14:03:44.253: INFO: Pod "downwardapi-volume-bb3366e4-1714-48ad-84c1-4a3736b30a4d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.816826ms
Aug 28 14:03:46.260: INFO: Pod "downwardapi-volume-bb3366e4-1714-48ad-84c1-4a3736b30a4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019082339s
Aug 28 14:03:48.266: INFO: Pod "downwardapi-volume-bb3366e4-1714-48ad-84c1-4a3736b30a4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025377291s
STEP: Saw pod success
Aug 28 14:03:48.266: INFO: Pod "downwardapi-volume-bb3366e4-1714-48ad-84c1-4a3736b30a4d" satisfied condition "Succeeded or Failed"
Aug 28 14:03:48.271: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-bb3366e4-1714-48ad-84c1-4a3736b30a4d container client-container: 
STEP: delete the pod
Aug 28 14:03:48.314: INFO: Waiting for pod downwardapi-volume-bb3366e4-1714-48ad-84c1-4a3736b30a4d to disappear
Aug 28 14:03:48.391: INFO: Pod downwardapi-volume-bb3366e4-1714-48ad-84c1-4a3736b30a4d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:03:48.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6009" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2024,"failed":0}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:03:48.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 28 14:03:48.818: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 28 14:03:48.863: INFO: Waiting for terminating namespaces to be deleted...
Aug 28 14:03:48.866: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 28 14:03:48.881: INFO: kindnet-f7bnz from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container status recorded)
Aug 28 14:03:48.882: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 14:03:48.882: INFO: kube-proxy-hhbw6 from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container status recorded)
Aug 28 14:03:48.882: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 14:03:48.882: INFO: daemon-set-rsfwc from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container status recorded)
Aug 28 14:03:48.882: INFO: 	Container app ready: true, restart count 0
Aug 28 14:03:48.882: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 28 14:03:48.893: INFO: kindnet-4v6sn from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container status recorded)
Aug 28 14:03:48.893: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 14:03:48.893: INFO: kube-proxy-m77qg from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container status recorded)
Aug 28 14:03:48.893: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 14:03:48.894: INFO: daemon-set-69cql from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container status recorded)
Aug 28 14:03:48.894: INFO: 	Container app ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Aug 28 14:03:49.082: INFO: Pod daemon-set-69cql requesting resource cpu=0m on Node kali-worker2
Aug 28 14:03:49.083: INFO: Pod daemon-set-rsfwc requesting resource cpu=0m on Node kali-worker
Aug 28 14:03:49.083: INFO: Pod kindnet-4v6sn requesting resource cpu=100m on Node kali-worker2
Aug 28 14:03:49.083: INFO: Pod kindnet-f7bnz requesting resource cpu=100m on Node kali-worker
Aug 28 14:03:49.083: INFO: Pod kube-proxy-hhbw6 requesting resource cpu=0m on Node kali-worker
Aug 28 14:03:49.083: INFO: Pod kube-proxy-m77qg requesting resource cpu=0m on Node kali-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Aug 28 14:03:49.083: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Aug 28 14:03:49.113: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0c6f5e60-86b9-4da3-96a0-fbc30e4a4aa2.162f7389f7e8e193], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1662/filler-pod-0c6f5e60-86b9-4da3-96a0-fbc30e4a4aa2 to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0c6f5e60-86b9-4da3-96a0-fbc30e4a4aa2.162f738b19922bfe], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0c6f5e60-86b9-4da3-96a0-fbc30e4a4aa2.162f738c12a93b9e], Reason = [Created], Message = [Created container filler-pod-0c6f5e60-86b9-4da3-96a0-fbc30e4a4aa2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0c6f5e60-86b9-4da3-96a0-fbc30e4a4aa2.162f738c49178d06], Reason = [Started], Message = [Started container filler-pod-0c6f5e60-86b9-4da3-96a0-fbc30e4a4aa2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bda35b6f-a1b0-4c11-8bbf-cf04b742395f.162f7389f7257d71], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1662/filler-pod-bda35b6f-a1b0-4c11-8bbf-cf04b742395f to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bda35b6f-a1b0-4c11-8bbf-cf04b742395f.162f738a4e5dcb1e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bda35b6f-a1b0-4c11-8bbf-cf04b742395f.162f738b73cfd2cf], Reason = [Created], Message = [Created container filler-pod-bda35b6f-a1b0-4c11-8bbf-cf04b742395f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bda35b6f-a1b0-4c11-8bbf-cf04b742395f.162f738bbf55f5d4], Reason = [Started], Message = [Started container filler-pod-bda35b6f-a1b0-4c11-8bbf-cf04b742395f]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162f738cd326c45d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:04:02.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1662" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:14.043 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":131,"skipped":2033,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:04:02.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 28 14:04:06.858: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:04:06.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4221" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2146,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:04:06.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:04:14.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4451" for this suite.
STEP: Destroying namespace "nsdeletetest-3697" for this suite.
Aug 28 14:04:14.696: INFO: Namespace nsdeletetest-3697 was already deleted
STEP: Destroying namespace "nsdeletetest-8668" for this suite.

• [SLOW TEST:7.757 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":133,"skipped":2154,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:04:14.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:04:14.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a195b47e-4fa2-43dc-b505-80195cb3775c" in namespace "downward-api-1827" to be "Succeeded or Failed"
Aug 28 14:04:14.909: INFO: Pod "downwardapi-volume-a195b47e-4fa2-43dc-b505-80195cb3775c": Phase="Pending", Reason="", readiness=false. Elapsed: 53.452585ms
Aug 28 14:04:17.048: INFO: Pod "downwardapi-volume-a195b47e-4fa2-43dc-b505-80195cb3775c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192535833s
Aug 28 14:04:19.056: INFO: Pod "downwardapi-volume-a195b47e-4fa2-43dc-b505-80195cb3775c": Phase="Running", Reason="", readiness=true. Elapsed: 4.200424628s
Aug 28 14:04:21.064: INFO: Pod "downwardapi-volume-a195b47e-4fa2-43dc-b505-80195cb3775c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.208418806s
STEP: Saw pod success
Aug 28 14:04:21.065: INFO: Pod "downwardapi-volume-a195b47e-4fa2-43dc-b505-80195cb3775c" satisfied condition "Succeeded or Failed"
Aug 28 14:04:21.069: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a195b47e-4fa2-43dc-b505-80195cb3775c container client-container: 
STEP: delete the pod
Aug 28 14:04:21.305: INFO: Waiting for pod downwardapi-volume-a195b47e-4fa2-43dc-b505-80195cb3775c to disappear
Aug 28 14:04:21.330: INFO: Pod downwardapi-volume-a195b47e-4fa2-43dc-b505-80195cb3775c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:04:21.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1827" for this suite.

• [SLOW TEST:6.640 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2168,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:04:21.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 28 14:04:33.538: INFO: Successfully updated pod "annotationupdate078a3cc8-2060-4ddc-bb9c-01b8147dfa8a"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:04:36.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2425" for this suite.

• [SLOW TEST:15.880 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2188,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:04:37.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-5ca884b9-47d2-4a99-8808-c8add266f826
STEP: Creating a pod to test consume secrets
Aug 28 14:04:39.493: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7690e527-2eab-4593-9b87-f0e91ecde471" in namespace "projected-779" to be "Succeeded or Failed"
Aug 28 14:04:39.574: INFO: Pod "pod-projected-secrets-7690e527-2eab-4593-9b87-f0e91ecde471": Phase="Pending", Reason="", readiness=false. Elapsed: 80.951574ms
Aug 28 14:04:41.603: INFO: Pod "pod-projected-secrets-7690e527-2eab-4593-9b87-f0e91ecde471": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109755134s
Aug 28 14:04:43.884: INFO: Pod "pod-projected-secrets-7690e527-2eab-4593-9b87-f0e91ecde471": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391388603s
Aug 28 14:04:45.925: INFO: Pod "pod-projected-secrets-7690e527-2eab-4593-9b87-f0e91ecde471": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432111915s
Aug 28 14:04:48.380: INFO: Pod "pod-projected-secrets-7690e527-2eab-4593-9b87-f0e91ecde471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.887021195s
STEP: Saw pod success
Aug 28 14:04:48.380: INFO: Pod "pod-projected-secrets-7690e527-2eab-4593-9b87-f0e91ecde471" satisfied condition "Succeeded or Failed"
Aug 28 14:04:48.586: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-7690e527-2eab-4593-9b87-f0e91ecde471 container secret-volume-test: 
STEP: delete the pod
Aug 28 14:04:49.023: INFO: Waiting for pod pod-projected-secrets-7690e527-2eab-4593-9b87-f0e91ecde471 to disappear
Aug 28 14:04:49.065: INFO: Pod pod-projected-secrets-7690e527-2eab-4593-9b87-f0e91ecde471 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:04:49.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-779" for this suite.

• [SLOW TEST:11.855 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2215,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:04:49.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:04:49.850: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ec658d5-0471-4bac-96e0-26ec3c4f1796" in namespace "downward-api-8794" to be "Succeeded or Failed"
Aug 28 14:04:49.943: INFO: Pod "downwardapi-volume-2ec658d5-0471-4bac-96e0-26ec3c4f1796": Phase="Pending", Reason="", readiness=false. Elapsed: 92.996093ms
Aug 28 14:04:52.458: INFO: Pod "downwardapi-volume-2ec658d5-0471-4bac-96e0-26ec3c4f1796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608243205s
Aug 28 14:04:54.464: INFO: Pod "downwardapi-volume-2ec658d5-0471-4bac-96e0-26ec3c4f1796": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613928362s
Aug 28 14:04:56.484: INFO: Pod "downwardapi-volume-2ec658d5-0471-4bac-96e0-26ec3c4f1796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.633637438s
STEP: Saw pod success
Aug 28 14:04:56.484: INFO: Pod "downwardapi-volume-2ec658d5-0471-4bac-96e0-26ec3c4f1796" satisfied condition "Succeeded or Failed"
Aug 28 14:04:56.722: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-2ec658d5-0471-4bac-96e0-26ec3c4f1796 container client-container: 
STEP: delete the pod
Aug 28 14:04:57.220: INFO: Waiting for pod downwardapi-volume-2ec658d5-0471-4bac-96e0-26ec3c4f1796 to disappear
Aug 28 14:04:57.231: INFO: Pod downwardapi-volume-2ec658d5-0471-4bac-96e0-26ec3c4f1796 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:04:57.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8794" for this suite.

• [SLOW TEST:8.158 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2257,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:04:57.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-aebb40d5-82db-441f-bb03-ddbbc297d57d
STEP: Creating a pod to test consume secrets
Aug 28 14:04:58.456: INFO: Waiting up to 5m0s for pod "pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3" in namespace "secrets-5522" to be "Succeeded or Failed"
Aug 28 14:04:58.844: INFO: Pod "pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 388.021097ms
Aug 28 14:05:00.851: INFO: Pod "pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39510775s
Aug 28 14:05:03.137: INFO: Pod "pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.680648015s
Aug 28 14:05:05.464: INFO: Pod "pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.00857516s
Aug 28 14:05:07.494: INFO: Pod "pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.038359673s
Aug 28 14:05:09.692: INFO: Pod "pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3": Phase="Running", Reason="", readiness=true. Elapsed: 11.236231611s
Aug 28 14:05:11.711: INFO: Pod "pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.255087673s
STEP: Saw pod success
Aug 28 14:05:11.711: INFO: Pod "pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3" satisfied condition "Succeeded or Failed"
Aug 28 14:05:11.716: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3 container secret-volume-test: 
STEP: delete the pod
Aug 28 14:05:12.881: INFO: Waiting for pod pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3 to disappear
Aug 28 14:05:12.928: INFO: Pod pod-secrets-3a6746c2-981d-4836-81c2-3c6ff2bbabf3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:05:12.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5522" for this suite.

• [SLOW TEST:16.026 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2274,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:05:13.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-958
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug 28 14:05:14.395: INFO: Found 0 stateful pods, waiting for 3
Aug 28 14:05:24.491: INFO: Found 2 stateful pods, waiting for 3
Aug 28 14:05:34.406: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 14:05:34.406: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 14:05:34.406: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 28 14:05:44.407: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 14:05:44.407: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 14:05:44.407: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 14:05:44.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-958 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 28 14:06:00.561: INFO: stderr: "I0828 14:06:00.188335    3018 log.go:172] (0x40000f2420) (0x4000550a00) Create stream\nI0828 14:06:00.195828    3018 log.go:172] (0x40000f2420) (0x4000550a00) Stream added, broadcasting: 1\nI0828 14:06:00.210468    3018 log.go:172] (0x40000f2420) Reply frame received for 1\nI0828 14:06:00.211260    3018 log.go:172] (0x40000f2420) (0x40007fd220) Create stream\nI0828 14:06:00.211333    3018 log.go:172] (0x40000f2420) (0x40007fd220) Stream added, broadcasting: 3\nI0828 14:06:00.213166    3018 log.go:172] (0x40000f2420) Reply frame received for 3\nI0828 14:06:00.213601    3018 log.go:172] (0x40000f2420) (0x4000550aa0) Create stream\nI0828 14:06:00.213700    3018 log.go:172] (0x40000f2420) (0x4000550aa0) Stream added, broadcasting: 5\nI0828 14:06:00.215697    3018 log.go:172] (0x40000f2420) Reply frame received for 5\nI0828 14:06:00.313102    3018 log.go:172] (0x40000f2420) Data frame received for 5\nI0828 14:06:00.313320    3018 log.go:172] (0x4000550aa0) (5) Data frame handling\nI0828 14:06:00.313817    3018 log.go:172] (0x4000550aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 14:06:00.530571    3018 log.go:172] (0x40000f2420) Data frame received for 3\nI0828 14:06:00.530774    3018 log.go:172] (0x40007fd220) (3) Data frame handling\nI0828 14:06:00.530910    3018 log.go:172] (0x40007fd220) (3) Data frame sent\nI0828 14:06:00.531819    3018 log.go:172] (0x40000f2420) Data frame received for 5\nI0828 14:06:00.531943    3018 log.go:172] (0x4000550aa0) (5) Data frame handling\nI0828 14:06:00.532542    3018 log.go:172] (0x40000f2420) Data frame received for 3\nI0828 14:06:00.533178    3018 log.go:172] (0x40007fd220) (3) Data frame handling\nI0828 14:06:00.534207    3018 log.go:172] (0x40000f2420) Data frame received for 1\nI0828 14:06:00.534386    3018 log.go:172] (0x4000550a00) (1) Data frame handling\nI0828 14:06:00.534578    3018 log.go:172] (0x4000550a00) (1) Data frame sent\nI0828 14:06:00.535664    3018 log.go:172] (0x40000f2420) (0x4000550a00) Stream removed, broadcasting: 1\nI0828 14:06:00.538327    3018 log.go:172] (0x40000f2420) Go away received\nI0828 14:06:00.542301    3018 log.go:172] (0x40000f2420) (0x4000550a00) Stream removed, broadcasting: 1\nI0828 14:06:00.542574    3018 log.go:172] (0x40000f2420) (0x40007fd220) Stream removed, broadcasting: 3\nI0828 14:06:00.542749    3018 log.go:172] (0x40000f2420) (0x4000550aa0) Stream removed, broadcasting: 5\n"
Aug 28 14:06:00.562: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 28 14:06:00.562: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 28 14:06:10.603: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 28 14:06:20.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-958 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:06:22.236: INFO: stderr: "I0828 14:06:22.149482    3049 log.go:172] (0x40000e6420) (0x4000ad6140) Create stream\nI0828 14:06:22.151947    3049 log.go:172] (0x40000e6420) (0x4000ad6140) Stream added, broadcasting: 1\nI0828 14:06:22.160619    3049 log.go:172] (0x40000e6420) Reply frame received for 1\nI0828 14:06:22.161385    3049 log.go:172] (0x40000e6420) (0x4000912000) Create stream\nI0828 14:06:22.161467    3049 log.go:172] (0x40000e6420) (0x4000912000) Stream added, broadcasting: 3\nI0828 14:06:22.163377    3049 log.go:172] (0x40000e6420) Reply frame received for 3\nI0828 14:06:22.163659    3049 log.go:172] (0x40000e6420) (0x4000ad61e0) Create stream\nI0828 14:06:22.163720    3049 log.go:172] (0x40000e6420) (0x4000ad61e0) Stream added, broadcasting: 5\nI0828 14:06:22.164711    3049 log.go:172] (0x40000e6420) Reply frame received for 5\nI0828 14:06:22.221003    3049 log.go:172] (0x40000e6420) Data frame received for 5\nI0828 14:06:22.221189    3049 log.go:172] (0x40000e6420) Data frame received for 3\nI0828 14:06:22.221248    3049 log.go:172] (0x4000912000) (3) Data frame handling\nI0828 14:06:22.221336    3049 log.go:172] (0x4000ad61e0) (5) Data frame handling\nI0828 14:06:22.221421    3049 log.go:172] (0x40000e6420) Data frame received for 1\nI0828 14:06:22.221471    3049 log.go:172] (0x4000ad6140) (1) Data frame handling\nI0828 14:06:22.221941    3049 log.go:172] (0x4000912000) (3) Data frame sent\nI0828 14:06:22.222014    3049 log.go:172] (0x4000ad61e0) (5) Data frame sent\nI0828 14:06:22.222098    3049 log.go:172] (0x40000e6420) Data frame received for 3\nI0828 14:06:22.222149    3049 log.go:172] (0x4000912000) (3) Data frame handling\nI0828 14:06:22.222235    3049 log.go:172] (0x4000ad6140) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 14:06:22.223120    3049 log.go:172] (0x40000e6420) Data frame received for 5\nI0828 14:06:22.223708    3049 log.go:172] (0x40000e6420) (0x4000ad6140) Stream removed, broadcasting: 1\nI0828 14:06:22.224459    3049 log.go:172] (0x4000ad61e0) (5) Data frame handling\nI0828 14:06:22.225341    3049 log.go:172] (0x40000e6420) Go away received\nI0828 14:06:22.227586    3049 log.go:172] (0x40000e6420) (0x4000ad6140) Stream removed, broadcasting: 1\nI0828 14:06:22.227783    3049 log.go:172] (0x40000e6420) (0x4000912000) Stream removed, broadcasting: 3\nI0828 14:06:22.227913    3049 log.go:172] (0x40000e6420) (0x4000ad61e0) Stream removed, broadcasting: 5\n"
Aug 28 14:06:22.237: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 28 14:06:22.237: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 28 14:06:32.271: INFO: Waiting for StatefulSet statefulset-958/ss2 to complete update
Aug 28 14:06:32.272: INFO: Waiting for Pod statefulset-958/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 28 14:06:32.272: INFO: Waiting for Pod statefulset-958/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 28 14:06:32.272: INFO: Waiting for Pod statefulset-958/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 28 14:06:42.283: INFO: Waiting for StatefulSet statefulset-958/ss2 to complete update
Aug 28 14:06:42.283: INFO: Waiting for Pod statefulset-958/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 28 14:06:42.283: INFO: Waiting for Pod statefulset-958/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 28 14:06:52.411: INFO: Waiting for StatefulSet statefulset-958/ss2 to complete update
Aug 28 14:06:52.412: INFO: Waiting for Pod statefulset-958/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 28 14:07:02.281: INFO: Waiting for StatefulSet statefulset-958/ss2 to complete update
Aug 28 14:07:02.281: INFO: Waiting for Pod statefulset-958/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 28 14:07:12.443: INFO: Waiting for StatefulSet statefulset-958/ss2 to complete update
STEP: Rolling back to a previous revision
Aug 28 14:07:22.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-958 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 28 14:07:25.367: INFO: stderr: "I0828 14:07:23.655963    3072 log.go:172] (0x400082cc60) (0x40007d9900) Create stream\nI0828 14:07:23.660321    3072 log.go:172] (0x400082cc60) (0x40007d9900) Stream added, broadcasting: 1\nI0828 14:07:23.671405    3072 log.go:172] (0x400082cc60) Reply frame received for 1\nI0828 14:07:23.672080    3072 log.go:172] (0x400082cc60) (0x40007d99a0) Create stream\nI0828 14:07:23.672130    3072 log.go:172] (0x400082cc60) (0x40007d99a0) Stream added, broadcasting: 3\nI0828 14:07:23.674062    3072 log.go:172] (0x400082cc60) Reply frame received for 3\nI0828 14:07:23.674684    3072 log.go:172] (0x400082cc60) (0x4000900000) Create stream\nI0828 14:07:23.674766    3072 log.go:172] (0x400082cc60) (0x4000900000) Stream added, broadcasting: 5\nI0828 14:07:23.676015    3072 log.go:172] (0x400082cc60) Reply frame received for 5\nI0828 14:07:23.758489    3072 log.go:172] (0x400082cc60) Data frame received for 5\nI0828 14:07:23.758718    3072 log.go:172] (0x4000900000) (5) Data frame handling\nI0828 14:07:23.759295    3072 log.go:172] (0x4000900000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 14:07:23.782118    3072 log.go:172] (0x400082cc60) Data frame received for 3\nI0828 14:07:23.782218    3072 log.go:172] (0x40007d99a0) (3) Data frame handling\nI0828 14:07:23.782346    3072 log.go:172] (0x400082cc60) Data frame received for 5\nI0828 14:07:23.782478    3072 log.go:172] (0x4000900000) (5) Data frame handling\nI0828 14:07:23.782563    3072 log.go:172] (0x40007d99a0) (3) Data frame sent\nI0828 14:07:23.782726    3072 log.go:172] (0x400082cc60) Data frame received for 3\nI0828 14:07:23.782856    3072 log.go:172] (0x40007d99a0) (3) Data frame handling\nI0828 14:07:23.784654    3072 log.go:172] (0x400082cc60) Data frame received for 1\nI0828 14:07:23.784832    3072 log.go:172] (0x40007d9900) (1) Data frame handling\nI0828 14:07:23.784960    3072 log.go:172] (0x40007d9900) (1) Data frame sent\nI0828 14:07:25.348013    3072 log.go:172] (0x400082cc60) (0x40007d9900) Stream removed, broadcasting: 1\nI0828 14:07:25.351198    3072 log.go:172] (0x400082cc60) Go away received\nI0828 14:07:25.357015    3072 log.go:172] (0x400082cc60) (0x40007d9900) Stream removed, broadcasting: 1\nI0828 14:07:25.357531    3072 log.go:172] (0x400082cc60) (0x40007d99a0) Stream removed, broadcasting: 3\nI0828 14:07:25.357819    3072 log.go:172] (0x400082cc60) (0x4000900000) Stream removed, broadcasting: 5\n"
Aug 28 14:07:25.369: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 28 14:07:25.369: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 28 14:07:25.398: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 28 14:07:35.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-958 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:07:37.327: INFO: stderr: "I0828 14:07:37.215030    3094 log.go:172] (0x400003a630) (0x40007e15e0) Create stream\nI0828 14:07:37.220409    3094 log.go:172] (0x400003a630) (0x40007e15e0) Stream added, broadcasting: 1\nI0828 14:07:37.231612    3094 log.go:172] (0x400003a630) Reply frame received for 1\nI0828 14:07:37.232376    3094 log.go:172] (0x400003a630) (0x40008fe140) Create stream\nI0828 14:07:37.232448    3094 log.go:172] (0x400003a630) (0x40008fe140) Stream added, broadcasting: 3\nI0828 14:07:37.234087    3094 log.go:172] (0x400003a630) Reply frame received for 3\nI0828 14:07:37.234375    3094 log.go:172] (0x400003a630) (0x40007ab7c0) Create stream\nI0828 14:07:37.234470    3094 log.go:172] (0x400003a630) (0x40007ab7c0) Stream added, broadcasting: 5\nI0828 14:07:37.235967    3094 log.go:172] (0x400003a630) Reply frame received for 5\nI0828 14:07:37.304579    3094 log.go:172] (0x400003a630) Data frame received for 5\nI0828 14:07:37.305055    3094 log.go:172] (0x400003a630) Data frame received for 3\nI0828 14:07:37.305241    3094 log.go:172] (0x400003a630) Data frame received for 1\nI0828 14:07:37.305346    3094 log.go:172] (0x40007e15e0) (1) Data frame handling\nI0828 14:07:37.305435    3094 log.go:172] (0x40007ab7c0) (5) Data frame handling\nI0828 14:07:37.305714    3094 log.go:172] (0x40008fe140) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 14:07:37.306746    3094 log.go:172] (0x40007ab7c0) (5) Data frame sent\nI0828 14:07:37.306900    3094 log.go:172] (0x40007e15e0) (1) Data frame sent\nI0828 14:07:37.307104    3094 log.go:172] (0x400003a630) Data frame received for 5\nI0828 14:07:37.307176    3094 log.go:172] (0x40007ab7c0) (5) Data frame handling\nI0828 14:07:37.307297    3094 log.go:172] (0x40008fe140) (3) Data frame sent\nI0828 14:07:37.307410    3094 log.go:172] (0x400003a630) Data frame received for 3\nI0828 14:07:37.307538    3094 log.go:172] (0x40008fe140) (3) Data frame handling\nI0828 14:07:37.308807    3094 log.go:172] (0x400003a630) (0x40007e15e0) Stream removed, broadcasting: 1\nI0828 14:07:37.310986    3094 log.go:172] (0x400003a630) Go away received\nI0828 14:07:37.313994    3094 log.go:172] (0x400003a630) (0x40007e15e0) Stream removed, broadcasting: 1\nI0828 14:07:37.314239    3094 log.go:172] (0x400003a630) (0x40008fe140) Stream removed, broadcasting: 3\nI0828 14:07:37.314408    3094 log.go:172] (0x400003a630) (0x40007ab7c0) Stream removed, broadcasting: 5\n"
Aug 28 14:07:37.327: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 28 14:07:37.327: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 28 14:07:47.763: INFO: Waiting for StatefulSet statefulset-958/ss2 to complete update
Aug 28 14:07:47.763: INFO: Waiting for Pod statefulset-958/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 28 14:07:47.763: INFO: Waiting for Pod statefulset-958/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 28 14:07:58.256: INFO: Waiting for StatefulSet statefulset-958/ss2 to complete update
Aug 28 14:07:58.257: INFO: Waiting for Pod statefulset-958/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 28 14:08:07.777: INFO: Waiting for StatefulSet statefulset-958/ss2 to complete update
Aug 28 14:08:07.777: INFO: Waiting for Pod statefulset-958/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 28 14:08:18.169: INFO: Deleting all statefulset in ns statefulset-958
Aug 28 14:08:18.172: INFO: Scaling statefulset ss2 to 0
Aug 28 14:08:48.368: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 14:08:48.373: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:08:48.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-958" for this suite.

• [SLOW TEST:215.175 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":139,"skipped":2280,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:08:48.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 28 14:08:49.721: INFO: Waiting up to 5m0s for pod "pod-3eaa5b87-76a9-405f-96fd-3c541b8cf30f" in namespace "emptydir-3438" to be "Succeeded or Failed"
Aug 28 14:08:49.915: INFO: Pod "pod-3eaa5b87-76a9-405f-96fd-3c541b8cf30f": Phase="Pending", Reason="", readiness=false. Elapsed: 193.516094ms
Aug 28 14:08:51.951: INFO: Pod "pod-3eaa5b87-76a9-405f-96fd-3c541b8cf30f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229460616s
Aug 28 14:08:54.300: INFO: Pod "pod-3eaa5b87-76a9-405f-96fd-3c541b8cf30f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.578314898s
Aug 28 14:08:56.611: INFO: Pod "pod-3eaa5b87-76a9-405f-96fd-3c541b8cf30f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.889982711s
Aug 28 14:08:59.081: INFO: Pod "pod-3eaa5b87-76a9-405f-96fd-3c541b8cf30f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.359944s
STEP: Saw pod success
Aug 28 14:08:59.082: INFO: Pod "pod-3eaa5b87-76a9-405f-96fd-3c541b8cf30f" satisfied condition "Succeeded or Failed"
Aug 28 14:08:59.500: INFO: Trying to get logs from node kali-worker2 pod pod-3eaa5b87-76a9-405f-96fd-3c541b8cf30f container test-container: 
STEP: delete the pod
Aug 28 14:09:00.173: INFO: Waiting for pod pod-3eaa5b87-76a9-405f-96fd-3c541b8cf30f to disappear
Aug 28 14:09:00.319: INFO: Pod pod-3eaa5b87-76a9-405f-96fd-3c541b8cf30f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:09:00.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3438" for this suite.

• [SLOW TEST:11.926 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2280,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:09:00.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9239.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9239.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9239.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9239.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 14:09:17.135: INFO: DNS probes using dns-test-ff1c399b-ab97-4a9b-b3a3-5df3403eb3af succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9239.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9239.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9239.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9239.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 14:09:38.482: INFO: File wheezy_udp@dns-test-service-3.dns-9239.svc.cluster.local from pod  dns-9239/dns-test-842459b3-2248-4916-88b5-639dba413031 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 28 14:09:38.487: INFO: File jessie_udp@dns-test-service-3.dns-9239.svc.cluster.local from pod  dns-9239/dns-test-842459b3-2248-4916-88b5-639dba413031 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 28 14:09:38.487: INFO: Lookups using dns-9239/dns-test-842459b3-2248-4916-88b5-639dba413031 failed for: [wheezy_udp@dns-test-service-3.dns-9239.svc.cluster.local jessie_udp@dns-test-service-3.dns-9239.svc.cluster.local]

Aug 28 14:09:43.499: INFO: DNS probes using dns-test-842459b3-2248-4916-88b5-639dba413031 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9239.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9239.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9239.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9239.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 14:09:59.772: INFO: DNS probes using dns-test-401a6320-b7ea-4b68-b769-73174f49dc1d succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:10:00.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9239" for this suite.

• [SLOW TEST:60.438 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":141,"skipped":2283,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:10:00.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 28 14:10:01.832: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5287 /api/v1/namespaces/watch-5287/configmaps/e2e-watch-test-watch-closed 9499076b-4d0d-44fe-8bbc-942ec449fb51 1767595 0 2020-08-28 14:10:01 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-28 14:10:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:10:01.833: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5287 /api/v1/namespaces/watch-5287/configmaps/e2e-watch-test-watch-closed 9499076b-4d0d-44fe-8bbc-942ec449fb51 1767597 0 2020-08-28 14:10:01 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-28 14:10:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 28 14:10:02.786: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5287 /api/v1/namespaces/watch-5287/configmaps/e2e-watch-test-watch-closed 9499076b-4d0d-44fe-8bbc-942ec449fb51 1767601 0 2020-08-28 14:10:01 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-28 14:10:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:10:02.788: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5287 /api/v1/namespaces/watch-5287/configmaps/e2e-watch-test-watch-closed 9499076b-4d0d-44fe-8bbc-942ec449fb51 1767603 0 2020-08-28 14:10:01 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-28 14:10:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:10:02.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5287" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":142,"skipped":2289,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:10:03.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Aug 28 14:10:03.652: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix215168250/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:10:04.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7861" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":143,"skipped":2292,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:10:04.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:10:19.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1801" for this suite.

• [SLOW TEST:14.408 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":144,"skipped":2299,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:10:19.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 28 14:10:19.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5792'
Aug 28 14:10:23.072: INFO: stderr: ""
Aug 28 14:10:23.072: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 28 14:10:24.985: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:24.985: INFO: Found 0 / 1
Aug 28 14:10:25.457: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:25.457: INFO: Found 0 / 1
Aug 28 14:10:26.297: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:26.297: INFO: Found 0 / 1
Aug 28 14:10:27.142: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:27.142: INFO: Found 0 / 1
Aug 28 14:10:28.342: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:28.342: INFO: Found 0 / 1
Aug 28 14:10:29.349: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:29.349: INFO: Found 0 / 1
Aug 28 14:10:30.231: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:30.231: INFO: Found 0 / 1
Aug 28 14:10:31.179: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:31.179: INFO: Found 0 / 1
Aug 28 14:10:32.241: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:32.241: INFO: Found 0 / 1
Aug 28 14:10:33.267: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:33.267: INFO: Found 0 / 1
Aug 28 14:10:34.102: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:34.103: INFO: Found 1 / 1
Aug 28 14:10:34.103: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 28 14:10:34.107: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:34.108: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 28 14:10:34.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config patch pod agnhost-master-m6mqt --namespace=kubectl-5792 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 28 14:10:35.438: INFO: stderr: ""
Aug 28 14:10:35.438: INFO: stdout: "pod/agnhost-master-m6mqt patched\n"
STEP: checking annotations
Aug 28 14:10:35.452: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 28 14:10:35.452: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:10:35.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5792" for this suite.

• [SLOW TEST:16.107 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":145,"skipped":2326,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:10:35.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 28 14:10:35.604: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2944 /api/v1/namespaces/watch-2944/configmaps/e2e-watch-test-label-changed b82d2187-11c7-4b85-b99e-886651c71d2d 1767790 0 2020-08-28 14:10:35 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-28 14:10:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:10:35.606: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2944 /api/v1/namespaces/watch-2944/configmaps/e2e-watch-test-label-changed b82d2187-11c7-4b85-b99e-886651c71d2d 1767791 0 2020-08-28 14:10:35 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-28 14:10:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:10:35.606: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2944 /api/v1/namespaces/watch-2944/configmaps/e2e-watch-test-label-changed b82d2187-11c7-4b85-b99e-886651c71d2d 1767792 0 2020-08-28 14:10:35 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-28 14:10:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 28 14:10:45.807: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2944 /api/v1/namespaces/watch-2944/configmaps/e2e-watch-test-label-changed b82d2187-11c7-4b85-b99e-886651c71d2d 1767832 0 2020-08-28 14:10:35 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-28 14:10:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:10:45.809: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2944 /api/v1/namespaces/watch-2944/configmaps/e2e-watch-test-label-changed b82d2187-11c7-4b85-b99e-886651c71d2d 1767833 0 2020-08-28 14:10:35 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-28 14:10:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:10:45.809: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2944 /api/v1/namespaces/watch-2944/configmaps/e2e-watch-test-label-changed b82d2187-11c7-4b85-b99e-886651c71d2d 1767834 0 2020-08-28 14:10:35 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-28 14:10:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:10:45.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2944" for this suite.

• [SLOW TEST:10.371 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":146,"skipped":2354,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:10:45.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:10:49.393: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:10:52.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220649, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220649, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220649, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220649, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:10:54.135: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220649, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220649, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220649, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220649, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:10:57.199: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:10:57.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5188" for this suite.
STEP: Destroying namespace "webhook-5188-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.527 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":147,"skipped":2354,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:10:57.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-9c7da5f1-c6da-4bb8-b9c8-0dd289cb5f87
STEP: Creating secret with name s-test-opt-upd-488c07c7-227b-4bd5-8caa-eb77818e18bb
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9c7da5f1-c6da-4bb8-b9c8-0dd289cb5f87
STEP: Updating secret s-test-opt-upd-488c07c7-227b-4bd5-8caa-eb77818e18bb
STEP: Creating secret with name s-test-opt-create-d49a1adc-141f-457e-bf10-3b99cd71bebb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:11:09.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4676" for this suite.

• [SLOW TEST:12.490 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2374,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:11:09.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-93198fce-53b7-46b8-8de1-9674c1046b0d
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-93198fce-53b7-46b8-8de1-9674c1046b0d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:11:16.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-423" for this suite.

• [SLOW TEST:7.099 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2385,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:11:16.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 28 14:11:17.586: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6806 /api/v1/namespaces/watch-6806/configmaps/e2e-watch-test-resource-version ad6bd4bd-467a-4ff1-a5ca-1056295a9718 1768064 0 2020-08-28 14:11:17 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-08-28 14:11:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 28 14:11:17.587: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6806 /api/v1/namespaces/watch-6806/configmaps/e2e-watch-test-resource-version ad6bd4bd-467a-4ff1-a5ca-1056295a9718 1768065 0 2020-08-28 14:11:17 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-08-28 14:11:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:11:17.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6806" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":150,"skipped":2392,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:11:17.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:11:17.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2856" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":151,"skipped":2415,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:11:17.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 28 14:11:17.993: INFO: Waiting up to 5m0s for pod "downward-api-e45545a3-f5a4-4dd4-9107-c5918c59a45c" in namespace "downward-api-8114" to be "Succeeded or Failed"
Aug 28 14:11:18.002: INFO: Pod "downward-api-e45545a3-f5a4-4dd4-9107-c5918c59a45c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.679636ms
Aug 28 14:11:20.009: INFO: Pod "downward-api-e45545a3-f5a4-4dd4-9107-c5918c59a45c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016435831s
Aug 28 14:11:22.497: INFO: Pod "downward-api-e45545a3-f5a4-4dd4-9107-c5918c59a45c": Phase="Running", Reason="", readiness=true. Elapsed: 4.504318088s
Aug 28 14:11:24.820: INFO: Pod "downward-api-e45545a3-f5a4-4dd4-9107-c5918c59a45c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.827384892s
STEP: Saw pod success
Aug 28 14:11:24.821: INFO: Pod "downward-api-e45545a3-f5a4-4dd4-9107-c5918c59a45c" satisfied condition "Succeeded or Failed"
Aug 28 14:11:24.962: INFO: Trying to get logs from node kali-worker2 pod downward-api-e45545a3-f5a4-4dd4-9107-c5918c59a45c container dapi-container: 
STEP: delete the pod
Aug 28 14:11:25.096: INFO: Waiting for pod downward-api-e45545a3-f5a4-4dd4-9107-c5918c59a45c to disappear
Aug 28 14:11:25.102: INFO: Pod downward-api-e45545a3-f5a4-4dd4-9107-c5918c59a45c no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:11:25.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8114" for this suite.

• [SLOW TEST:7.228 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2417,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:11:25.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 28 14:11:25.186: INFO: Waiting up to 5m0s for pod "pod-7a931a60-b02b-4023-9019-437e74f19281" in namespace "emptydir-4144" to be "Succeeded or Failed"
Aug 28 14:11:25.260: INFO: Pod "pod-7a931a60-b02b-4023-9019-437e74f19281": Phase="Pending", Reason="", readiness=false. Elapsed: 74.151299ms
Aug 28 14:11:27.297: INFO: Pod "pod-7a931a60-b02b-4023-9019-437e74f19281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11123159s
Aug 28 14:11:29.340: INFO: Pod "pod-7a931a60-b02b-4023-9019-437e74f19281": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153611634s
Aug 28 14:11:31.346: INFO: Pod "pod-7a931a60-b02b-4023-9019-437e74f19281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160210567s
STEP: Saw pod success
Aug 28 14:11:31.347: INFO: Pod "pod-7a931a60-b02b-4023-9019-437e74f19281" satisfied condition "Succeeded or Failed"
Aug 28 14:11:31.351: INFO: Trying to get logs from node kali-worker2 pod pod-7a931a60-b02b-4023-9019-437e74f19281 container test-container: 
STEP: delete the pod
Aug 28 14:11:31.391: INFO: Waiting for pod pod-7a931a60-b02b-4023-9019-437e74f19281 to disappear
Aug 28 14:11:31.405: INFO: Pod pod-7a931a60-b02b-4023-9019-437e74f19281 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:11:31.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4144" for this suite.

• [SLOW TEST:6.300 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2439,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:11:31.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-1486
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1486
STEP: Deleting pre-stop pod
Aug 28 14:11:44.654: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:11:44.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1486" for this suite.

• [SLOW TEST:13.291 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":154,"skipped":2449,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:11:44.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-0b91f95b-6f98-463a-b25b-c1d1f2a3b73e
STEP: Creating a pod to test consume secrets
Aug 28 14:11:45.572: INFO: Waiting up to 5m0s for pod "pod-secrets-f2bf1f67-1847-4ea6-a74b-6afbef284a2b" in namespace "secrets-6345" to be "Succeeded or Failed"
Aug 28 14:11:45.574: INFO: Pod "pod-secrets-f2bf1f67-1847-4ea6-a74b-6afbef284a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307603ms
Aug 28 14:11:48.040: INFO: Pod "pod-secrets-f2bf1f67-1847-4ea6-a74b-6afbef284a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.467798048s
Aug 28 14:11:50.114: INFO: Pod "pod-secrets-f2bf1f67-1847-4ea6-a74b-6afbef284a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.541657718s
Aug 28 14:11:52.217: INFO: Pod "pod-secrets-f2bf1f67-1847-4ea6-a74b-6afbef284a2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.645393053s
STEP: Saw pod success
Aug 28 14:11:52.218: INFO: Pod "pod-secrets-f2bf1f67-1847-4ea6-a74b-6afbef284a2b" satisfied condition "Succeeded or Failed"
Aug 28 14:11:52.221: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-f2bf1f67-1847-4ea6-a74b-6afbef284a2b container secret-volume-test: 
STEP: delete the pod
Aug 28 14:11:52.496: INFO: Waiting for pod pod-secrets-f2bf1f67-1847-4ea6-a74b-6afbef284a2b to disappear
Aug 28 14:11:52.557: INFO: Pod pod-secrets-f2bf1f67-1847-4ea6-a74b-6afbef284a2b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:11:52.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6345" for this suite.

• [SLOW TEST:7.863 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:11:52.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:11:54.789: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31d166d7-ad0e-48dd-91a5-2d008294a552" in namespace "downward-api-7031" to be "Succeeded or Failed"
Aug 28 14:11:55.019: INFO: Pod "downwardapi-volume-31d166d7-ad0e-48dd-91a5-2d008294a552": Phase="Pending", Reason="", readiness=false. Elapsed: 230.184328ms
Aug 28 14:11:57.081: INFO: Pod "downwardapi-volume-31d166d7-ad0e-48dd-91a5-2d008294a552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292567764s
Aug 28 14:11:59.359: INFO: Pod "downwardapi-volume-31d166d7-ad0e-48dd-91a5-2d008294a552": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569727494s
Aug 28 14:12:01.404: INFO: Pod "downwardapi-volume-31d166d7-ad0e-48dd-91a5-2d008294a552": Phase="Running", Reason="", readiness=true. Elapsed: 6.615055647s
Aug 28 14:12:03.409: INFO: Pod "downwardapi-volume-31d166d7-ad0e-48dd-91a5-2d008294a552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.620571326s
STEP: Saw pod success
Aug 28 14:12:03.410: INFO: Pod "downwardapi-volume-31d166d7-ad0e-48dd-91a5-2d008294a552" satisfied condition "Succeeded or Failed"
Aug 28 14:12:03.415: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-31d166d7-ad0e-48dd-91a5-2d008294a552 container client-container: 
STEP: delete the pod
Aug 28 14:12:03.922: INFO: Waiting for pod downwardapi-volume-31d166d7-ad0e-48dd-91a5-2d008294a552 to disappear
Aug 28 14:12:03.932: INFO: Pod downwardapi-volume-31d166d7-ad0e-48dd-91a5-2d008294a552 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:12:03.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7031" for this suite.

• [SLOW TEST:11.366 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2504,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:12:03.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Aug 28 14:12:04.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config api-versions'
Aug 28 14:12:05.651: INFO: stderr: ""
Aug 28 14:12:05.651: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:12:05.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3322" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":157,"skipped":2540,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:12:05.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-3012
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3012 to expose endpoints map[]
Aug 28 14:12:06.179: INFO: successfully validated that service endpoint-test2 in namespace services-3012 exposes endpoints map[] (24.789236ms elapsed)
STEP: Creating pod pod1 in namespace services-3012
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3012 to expose endpoints map[pod1:[80]]
Aug 28 14:12:09.392: INFO: successfully validated that service endpoint-test2 in namespace services-3012 exposes endpoints map[pod1:[80]] (3.168968405s elapsed)
STEP: Creating pod pod2 in namespace services-3012
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3012 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 28 14:12:14.585: INFO: Unexpected endpoints: found map[449ec39c-dad2-4c2d-bf81-3a3f59ee5b5f:[80]], expected map[pod1:[80] pod2:[80]] (5.173221191s elapsed, will retry)
Aug 28 14:12:15.602: INFO: successfully validated that service endpoint-test2 in namespace services-3012 exposes endpoints map[pod1:[80] pod2:[80]] (6.189921034s elapsed)
STEP: Deleting pod pod1 in namespace services-3012
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3012 to expose endpoints map[pod2:[80]]
Aug 28 14:12:16.846: INFO: successfully validated that service endpoint-test2 in namespace services-3012 exposes endpoints map[pod2:[80]] (1.237548381s elapsed)
STEP: Deleting pod pod2 in namespace services-3012
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3012 to expose endpoints map[]
Aug 28 14:12:17.405: INFO: successfully validated that service endpoint-test2 in namespace services-3012 exposes endpoints map[] (552.231805ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:12:18.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3012" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.242 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":158,"skipped":2572,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:12:18.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 28 14:12:19.281: INFO: PodSpec: initContainers in spec.initContainers
Aug 28 14:13:14.344: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e3f30f14-0f2e-4bcc-a847-f064ee46f8a3", GenerateName:"", Namespace:"init-container-2640", SelfLink:"/api/v1/namespaces/init-container-2640/pods/pod-init-e3f30f14-0f2e-4bcc-a847-f064ee46f8a3", UID:"268733a7-2a6c-4ddf-a90f-aad6c98cb53d", ResourceVersion:"1768667", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734220739, loc:(*time.Location)(0x74b2e20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"279300167"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x40049121a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40049121c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x40049121e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4004912200)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2dp5j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4000ee6340), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2dp5j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2dp5j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2dp5j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4002f6e2b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4001dac380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4002f6e340)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4002f6e360)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x4002f6e368), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x4002f6e36c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220739, loc:(*time.Location)(0x74b2e20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220739, loc:(*time.Location)(0x74b2e20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220739, loc:(*time.Location)(0x74b2e20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220739, loc:(*time.Location)(0x74b2e20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.2.8", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.8"}}, StartTime:(*v1.Time)(0x4004912220), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x4004912260), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x4001dac460)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3f731e7950709b12e839a63448b6c98c87768225b12a8980c920d5931e1f864a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4004912280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4004912240), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0x4002f6e3ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:13:14.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2640" for this suite.

• [SLOW TEST:55.562 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":159,"skipped":2684,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:13:14.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:13:19.401: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:13:22.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220799, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220799, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220799, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220798, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:13:24.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220799, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220799, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220799, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220798, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:13:26.475: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220799, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220799, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220799, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734220798, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:13:29.059: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:13:29.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7578" for this suite.
STEP: Destroying namespace "webhook-7578-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.166 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":160,"skipped":2688,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:13:29.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:13:30.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1258
I0828 14:13:30.651095      11 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1258, replica count: 1
I0828 14:13:31.702394      11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:13:32.703041      11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:13:33.703614      11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:13:34.704399      11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:13:35.705232      11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:13:36.705867      11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:13:37.706552      11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 28 14:13:37.895: INFO: Created: latency-svc-l2z5p
Aug 28 14:13:38.004: INFO: Got endpoints: latency-svc-l2z5p [196.06142ms]
Aug 28 14:13:38.300: INFO: Created: latency-svc-lqvb7
Aug 28 14:13:38.651: INFO: Got endpoints: latency-svc-lqvb7 [645.701472ms]
Aug 28 14:13:39.007: INFO: Created: latency-svc-jg7zf
Aug 28 14:13:39.066: INFO: Got endpoints: latency-svc-jg7zf [1.058325743s]
Aug 28 14:13:39.155: INFO: Created: latency-svc-8vnnf
Aug 28 14:13:39.239: INFO: Got endpoints: latency-svc-8vnnf [1.231491861s]
Aug 28 14:13:39.240: INFO: Created: latency-svc-ngzhp
Aug 28 14:13:39.320: INFO: Got endpoints: latency-svc-ngzhp [1.310502045s]
Aug 28 14:13:39.362: INFO: Created: latency-svc-btn8l
Aug 28 14:13:39.385: INFO: Got endpoints: latency-svc-btn8l [1.379520122s]
Aug 28 14:13:39.467: INFO: Created: latency-svc-kmdp4
Aug 28 14:13:39.471: INFO: Got endpoints: latency-svc-kmdp4 [1.464435945s]
Aug 28 14:13:39.521: INFO: Created: latency-svc-tmwpv
Aug 28 14:13:39.534: INFO: Got endpoints: latency-svc-tmwpv [1.527275575s]
Aug 28 14:13:39.560: INFO: Created: latency-svc-rrgt2
Aug 28 14:13:39.620: INFO: Got endpoints: latency-svc-rrgt2 [1.611584277s]
Aug 28 14:13:39.653: INFO: Created: latency-svc-t2ndr
Aug 28 14:13:39.716: INFO: Got endpoints: latency-svc-t2ndr [1.708097746s]
Aug 28 14:13:39.776: INFO: Created: latency-svc-js6bg
Aug 28 14:13:39.776: INFO: Got endpoints: latency-svc-js6bg [1.76798641s]
Aug 28 14:13:39.815: INFO: Created: latency-svc-kxdzw
Aug 28 14:13:39.828: INFO: Got endpoints: latency-svc-kxdzw [1.819151747s]
Aug 28 14:13:39.865: INFO: Created: latency-svc-snw72
Aug 28 14:13:39.937: INFO: Got endpoints: latency-svc-snw72 [1.928186062s]
Aug 28 14:13:39.949: INFO: Created: latency-svc-gqb2r
Aug 28 14:13:39.954: INFO: Got endpoints: latency-svc-gqb2r [1.944801021s]
Aug 28 14:13:39.973: INFO: Created: latency-svc-qkghv
Aug 28 14:13:39.991: INFO: Got endpoints: latency-svc-qkghv [1.981538714s]
Aug 28 14:13:40.009: INFO: Created: latency-svc-jprzq
Aug 28 14:13:40.021: INFO: Got endpoints: latency-svc-jprzq [2.011185994s]
Aug 28 14:13:40.083: INFO: Created: latency-svc-ft5ct
Aug 28 14:13:40.120: INFO: Created: latency-svc-fgdrl
Aug 28 14:13:40.122: INFO: Got endpoints: latency-svc-ft5ct [1.470327577s]
Aug 28 14:13:40.178: INFO: Got endpoints: latency-svc-fgdrl [1.111856853s]
Aug 28 14:13:40.247: INFO: Created: latency-svc-sxv49
Aug 28 14:13:40.256: INFO: Got endpoints: latency-svc-sxv49 [1.01668676s]
Aug 28 14:13:40.276: INFO: Created: latency-svc-llh6g
Aug 28 14:13:40.294: INFO: Got endpoints: latency-svc-llh6g [973.783745ms]
Aug 28 14:13:40.346: INFO: Created: latency-svc-7tzcn
Aug 28 14:13:40.383: INFO: Got endpoints: latency-svc-7tzcn [997.520914ms]
Aug 28 14:13:40.403: INFO: Created: latency-svc-9ntnj
Aug 28 14:13:40.412: INFO: Got endpoints: latency-svc-9ntnj [940.419567ms]
Aug 28 14:13:40.430: INFO: Created: latency-svc-g54w6
Aug 28 14:13:40.444: INFO: Got endpoints: latency-svc-g54w6 [909.843ms]
Aug 28 14:13:40.463: INFO: Created: latency-svc-m6jsl
Aug 28 14:13:40.480: INFO: Got endpoints: latency-svc-m6jsl [859.981829ms]
Aug 28 14:13:40.536: INFO: Created: latency-svc-b5k7c
Aug 28 14:13:40.553: INFO: Got endpoints: latency-svc-b5k7c [836.941081ms]
Aug 28 14:13:40.601: INFO: Created: latency-svc-4qvfb
Aug 28 14:13:40.612: INFO: Got endpoints: latency-svc-4qvfb [835.410752ms]
Aug 28 14:13:40.670: INFO: Created: latency-svc-jcrn7
Aug 28 14:13:40.698: INFO: Created: latency-svc-sbmjp
Aug 28 14:13:40.700: INFO: Got endpoints: latency-svc-jcrn7 [871.749222ms]
Aug 28 14:13:40.736: INFO: Got endpoints: latency-svc-sbmjp [798.565041ms]
Aug 28 14:13:40.840: INFO: Created: latency-svc-dxcgr
Aug 28 14:13:40.847: INFO: Got endpoints: latency-svc-dxcgr [892.707571ms]
Aug 28 14:13:40.879: INFO: Created: latency-svc-vcsnz
Aug 28 14:13:40.901: INFO: Got endpoints: latency-svc-vcsnz [909.978313ms]
Aug 28 14:13:40.931: INFO: Created: latency-svc-5b7ql
Aug 28 14:13:41.011: INFO: Got endpoints: latency-svc-5b7ql [989.198806ms]
Aug 28 14:13:41.013: INFO: Created: latency-svc-hr98w
Aug 28 14:13:41.021: INFO: Got endpoints: latency-svc-hr98w [899.353589ms]
Aug 28 14:13:41.074: INFO: Created: latency-svc-2wk8c
Aug 28 14:13:41.088: INFO: Got endpoints: latency-svc-2wk8c [909.871312ms]
Aug 28 14:13:41.110: INFO: Created: latency-svc-xh8fz
Aug 28 14:13:41.204: INFO: Got endpoints: latency-svc-xh8fz [947.520371ms]
Aug 28 14:13:41.205: INFO: Created: latency-svc-69c42
Aug 28 14:13:41.235: INFO: Got endpoints: latency-svc-69c42 [940.706298ms]
Aug 28 14:13:41.278: INFO: Created: latency-svc-7vxrd
Aug 28 14:13:41.390: INFO: Got endpoints: latency-svc-7vxrd [1.006333226s]
Aug 28 14:13:41.391: INFO: Created: latency-svc-fd7vp
Aug 28 14:13:41.400: INFO: Got endpoints: latency-svc-fd7vp [988.006208ms]
Aug 28 14:13:41.435: INFO: Created: latency-svc-f67jd
Aug 28 14:13:41.449: INFO: Got endpoints: latency-svc-f67jd [1.004600687s]
Aug 28 14:13:41.468: INFO: Created: latency-svc-pd2rp
Aug 28 14:13:41.551: INFO: Got endpoints: latency-svc-pd2rp [1.070935249s]
Aug 28 14:13:41.554: INFO: Created: latency-svc-mzpfr
Aug 28 14:13:41.565: INFO: Got endpoints: latency-svc-mzpfr [1.011673387s]
Aug 28 14:13:41.765: INFO: Created: latency-svc-2ppvh
Aug 28 14:13:41.806: INFO: Got endpoints: latency-svc-2ppvh [1.194129585s]
Aug 28 14:13:41.930: INFO: Created: latency-svc-mz9w9
Aug 28 14:13:41.943: INFO: Got endpoints: latency-svc-mz9w9 [1.242914661s]
Aug 28 14:13:41.966: INFO: Created: latency-svc-fjr4l
Aug 28 14:13:41.978: INFO: Got endpoints: latency-svc-fjr4l [1.242727456s]
Aug 28 14:13:41.998: INFO: Created: latency-svc-9spcv
Aug 28 14:13:42.028: INFO: Got endpoints: latency-svc-9spcv [1.180834929s]
Aug 28 14:13:42.081: INFO: Created: latency-svc-h6rm6
Aug 28 14:13:42.117: INFO: Got endpoints: latency-svc-h6rm6 [1.214828021s]
Aug 28 14:13:42.155: INFO: Created: latency-svc-rm77q
Aug 28 14:13:42.271: INFO: Got endpoints: latency-svc-rm77q [1.260615556s]
Aug 28 14:13:42.274: INFO: Created: latency-svc-ch5cq
Aug 28 14:13:42.285: INFO: Got endpoints: latency-svc-ch5cq [1.263919125s]
Aug 28 14:13:42.337: INFO: Created: latency-svc-6vxn8
Aug 28 14:13:42.364: INFO: Got endpoints: latency-svc-6vxn8 [1.275439969s]
Aug 28 14:13:42.407: INFO: Created: latency-svc-44kbg
Aug 28 14:13:42.410: INFO: Got endpoints: latency-svc-44kbg [1.20560587s]
Aug 28 14:13:42.477: INFO: Created: latency-svc-p6fb4
Aug 28 14:13:42.490: INFO: Got endpoints: latency-svc-p6fb4 [1.255206014s]
Aug 28 14:13:42.562: INFO: Created: latency-svc-5kq97
Aug 28 14:13:42.591: INFO: Got endpoints: latency-svc-5kq97 [1.201025292s]
Aug 28 14:13:42.616: INFO: Created: latency-svc-wc64x
Aug 28 14:13:42.629: INFO: Got endpoints: latency-svc-wc64x [1.228248455s]
Aug 28 14:13:42.690: INFO: Created: latency-svc-cwhsk
Aug 28 14:13:42.706: INFO: Got endpoints: latency-svc-cwhsk [1.257218394s]
Aug 28 14:13:42.724: INFO: Created: latency-svc-gjlzs
Aug 28 14:13:42.743: INFO: Got endpoints: latency-svc-gjlzs [1.191841378s]
Aug 28 14:13:42.765: INFO: Created: latency-svc-2f5hr
Aug 28 14:13:42.781: INFO: Got endpoints: latency-svc-2f5hr [1.215617701s]
Aug 28 14:13:42.834: INFO: Created: latency-svc-hclbg
Aug 28 14:13:42.858: INFO: Got endpoints: latency-svc-hclbg [1.051060232s]
Aug 28 14:13:42.934: INFO: Created: latency-svc-fqrcz
Aug 28 14:13:43.066: INFO: Got endpoints: latency-svc-fqrcz [1.122152989s]
Aug 28 14:13:43.067: INFO: Created: latency-svc-r6rgr
Aug 28 14:13:43.116: INFO: Got endpoints: latency-svc-r6rgr [1.137684663s]
Aug 28 14:13:43.217: INFO: Created: latency-svc-c86bt
Aug 28 14:13:43.234: INFO: Got endpoints: latency-svc-c86bt [1.206004978s]
Aug 28 14:13:43.270: INFO: Created: latency-svc-k6pnv
Aug 28 14:13:43.383: INFO: Got endpoints: latency-svc-k6pnv [1.265867718s]
Aug 28 14:13:43.426: INFO: Created: latency-svc-tlfcg
Aug 28 14:13:43.438: INFO: Got endpoints: latency-svc-tlfcg [1.166367866s]
Aug 28 14:13:43.564: INFO: Created: latency-svc-jv9ps
Aug 28 14:13:43.939: INFO: Got endpoints: latency-svc-jv9ps [1.653926735s]
Aug 28 14:13:43.942: INFO: Created: latency-svc-8wz4f
Aug 28 14:13:44.213: INFO: Got endpoints: latency-svc-8wz4f [1.848898592s]
Aug 28 14:13:44.658: INFO: Created: latency-svc-4brr2
Aug 28 14:13:44.739: INFO: Got endpoints: latency-svc-4brr2 [2.32874042s]
Aug 28 14:13:44.875: INFO: Created: latency-svc-trvz7
Aug 28 14:13:44.886: INFO: Got endpoints: latency-svc-trvz7 [2.395362974s]
Aug 28 14:13:44.934: INFO: Created: latency-svc-5fgdn
Aug 28 14:13:45.020: INFO: Got endpoints: latency-svc-5fgdn [2.429120409s]
Aug 28 14:13:45.061: INFO: Created: latency-svc-pj27l
Aug 28 14:13:45.080: INFO: Got endpoints: latency-svc-pj27l [2.450872655s]
Aug 28 14:13:45.109: INFO: Created: latency-svc-zgkcw
Aug 28 14:13:45.206: INFO: Got endpoints: latency-svc-zgkcw [2.499999476s]
Aug 28 14:13:45.279: INFO: Created: latency-svc-9ff98
Aug 28 14:13:45.584: INFO: Got endpoints: latency-svc-9ff98 [2.840162365s]
Aug 28 14:13:45.912: INFO: Created: latency-svc-6mmfp
Aug 28 14:13:45.922: INFO: Got endpoints: latency-svc-6mmfp [3.141051886s]
Aug 28 14:13:45.957: INFO: Created: latency-svc-2ldjx
Aug 28 14:13:45.988: INFO: Got endpoints: latency-svc-2ldjx [3.130035594s]
Aug 28 14:13:46.177: INFO: Created: latency-svc-kct9h
Aug 28 14:13:46.355: INFO: Got endpoints: latency-svc-kct9h [3.288729874s]
Aug 28 14:13:46.370: INFO: Created: latency-svc-7w9qh
Aug 28 14:13:46.419: INFO: Got endpoints: latency-svc-7w9qh [3.302869934s]
Aug 28 14:13:46.535: INFO: Created: latency-svc-g5lg4
Aug 28 14:13:46.551: INFO: Got endpoints: latency-svc-g5lg4 [3.316578023s]
Aug 28 14:13:46.592: INFO: Created: latency-svc-6k5ph
Aug 28 14:13:46.607: INFO: Got endpoints: latency-svc-6k5ph [3.223683569s]
Aug 28 14:13:46.663: INFO: Created: latency-svc-mqqdb
Aug 28 14:13:46.745: INFO: Got endpoints: latency-svc-mqqdb [3.306609172s]
Aug 28 14:13:46.803: INFO: Created: latency-svc-b49gg
Aug 28 14:13:46.815: INFO: Got endpoints: latency-svc-b49gg [2.875849462s]
Aug 28 14:13:46.860: INFO: Created: latency-svc-9nkwt
Aug 28 14:13:46.891: INFO: Got endpoints: latency-svc-9nkwt [2.678210638s]
Aug 28 14:13:46.978: INFO: Created: latency-svc-956p2
Aug 28 14:13:47.002: INFO: Got endpoints: latency-svc-956p2 [2.263557567s]
Aug 28 14:13:47.058: INFO: Created: latency-svc-qlbjc
Aug 28 14:13:47.074: INFO: Got endpoints: latency-svc-qlbjc [2.187754369s]
Aug 28 14:13:47.125: INFO: Created: latency-svc-snjgd
Aug 28 14:13:47.153: INFO: Got endpoints: latency-svc-snjgd [2.132718707s]
Aug 28 14:13:47.223: INFO: Created: latency-svc-znpmp
Aug 28 14:13:47.241: INFO: Got endpoints: latency-svc-znpmp [2.160724365s]
Aug 28 14:13:47.274: INFO: Created: latency-svc-mw455
Aug 28 14:13:47.294: INFO: Got endpoints: latency-svc-mw455 [2.087213584s]
Aug 28 14:13:47.427: INFO: Created: latency-svc-fjhtf
Aug 28 14:13:47.428: INFO: Got endpoints: latency-svc-fjhtf [1.843944805s]
Aug 28 14:13:47.475: INFO: Created: latency-svc-l8qzt
Aug 28 14:13:47.515: INFO: Got endpoints: latency-svc-l8qzt [1.592791247s]
Aug 28 14:13:47.601: INFO: Created: latency-svc-g6tjg
Aug 28 14:13:47.611: INFO: Got endpoints: latency-svc-g6tjg [1.623133969s]
Aug 28 14:13:47.657: INFO: Created: latency-svc-cc7mk
Aug 28 14:13:47.671: INFO: Got endpoints: latency-svc-cc7mk [1.3160777s]
Aug 28 14:13:47.694: INFO: Created: latency-svc-4tmp4
Aug 28 14:13:47.785: INFO: Got endpoints: latency-svc-4tmp4 [1.364959817s]
Aug 28 14:13:47.786: INFO: Created: latency-svc-qj42w
Aug 28 14:13:47.791: INFO: Got endpoints: latency-svc-qj42w [1.240543104s]
Aug 28 14:13:47.824: INFO: Created: latency-svc-47vh5
Aug 28 14:13:47.841: INFO: Got endpoints: latency-svc-47vh5 [1.23334393s]
Aug 28 14:13:47.860: INFO: Created: latency-svc-4vqpt
Aug 28 14:13:47.877: INFO: Got endpoints: latency-svc-4vqpt [1.131803199s]
Aug 28 14:13:47.944: INFO: Created: latency-svc-cxl8n
Aug 28 14:13:47.961: INFO: Got endpoints: latency-svc-cxl8n [1.144948781s]
Aug 28 14:13:47.994: INFO: Created: latency-svc-qxwwk
Aug 28 14:13:48.009: INFO: Got endpoints: latency-svc-qxwwk [1.117713689s]
Aug 28 14:13:48.047: INFO: Created: latency-svc-mlfnn
Aug 28 14:13:48.051: INFO: Got endpoints: latency-svc-mlfnn [1.048505677s]
Aug 28 14:13:48.082: INFO: Created: latency-svc-k9pvk
Aug 28 14:13:48.094: INFO: Got endpoints: latency-svc-k9pvk [1.020048032s]
Aug 28 14:13:48.111: INFO: Created: latency-svc-xc7lr
Aug 28 14:13:48.124: INFO: Got endpoints: latency-svc-xc7lr [970.32059ms]
Aug 28 14:13:48.141: INFO: Created: latency-svc-jm5d9
Aug 28 14:13:48.210: INFO: Got endpoints: latency-svc-jm5d9 [969.307662ms]
Aug 28 14:13:48.212: INFO: Created: latency-svc-ncw6l
Aug 28 14:13:48.243: INFO: Got endpoints: latency-svc-ncw6l [949.489337ms]
Aug 28 14:13:48.365: INFO: Created: latency-svc-dglzd
Aug 28 14:13:48.391: INFO: Got endpoints: latency-svc-dglzd [963.095125ms]
Aug 28 14:13:48.417: INFO: Created: latency-svc-k5j8h
Aug 28 14:13:48.425: INFO: Got endpoints: latency-svc-k5j8h [909.701899ms]
Aug 28 14:13:48.451: INFO: Created: latency-svc-wkwpz
Aug 28 14:13:48.515: INFO: Got endpoints: latency-svc-wkwpz [904.043259ms]
Aug 28 14:13:48.561: INFO: Created: latency-svc-xqsl4
Aug 28 14:13:48.606: INFO: Got endpoints: latency-svc-xqsl4 [935.024653ms]
Aug 28 14:13:48.666: INFO: Created: latency-svc-8xh9f
Aug 28 14:13:48.677: INFO: Got endpoints: latency-svc-8xh9f [892.12296ms]
Aug 28 14:13:48.741: INFO: Created: latency-svc-mlm7c
Aug 28 14:13:48.764: INFO: Got endpoints: latency-svc-mlm7c [972.143237ms]
Aug 28 14:13:48.803: INFO: Created: latency-svc-n2lnv
Aug 28 14:13:48.826: INFO: Got endpoints: latency-svc-n2lnv [984.764927ms]
Aug 28 14:13:48.855: INFO: Created: latency-svc-gfklb
Aug 28 14:13:48.871: INFO: Got endpoints: latency-svc-gfklb [994.015095ms]
Aug 28 14:13:48.889: INFO: Created: latency-svc-7lp8k
Aug 28 14:13:48.901: INFO: Got endpoints: latency-svc-7lp8k [940.688851ms]
Aug 28 14:13:48.957: INFO: Created: latency-svc-9msm9
Aug 28 14:13:48.965: INFO: Got endpoints: latency-svc-9msm9 [955.896551ms]
Aug 28 14:13:49.001: INFO: Created: latency-svc-ktmxr
Aug 28 14:13:49.014: INFO: Got endpoints: latency-svc-ktmxr [962.486015ms]
Aug 28 14:13:49.036: INFO: Created: latency-svc-j86nb
Aug 28 14:13:49.056: INFO: Got endpoints: latency-svc-j86nb [961.768911ms]
Aug 28 14:13:49.104: INFO: Created: latency-svc-drqs2
Aug 28 14:13:49.122: INFO: Got endpoints: latency-svc-drqs2 [997.883427ms]
Aug 28 14:13:49.161: INFO: Created: latency-svc-p2zpc
Aug 28 14:13:49.185: INFO: Got endpoints: latency-svc-p2zpc [974.306219ms]
Aug 28 14:13:49.244: INFO: Created: latency-svc-5dbqx
Aug 28 14:13:49.262: INFO: Got endpoints: latency-svc-5dbqx [1.018350656s]
Aug 28 14:13:49.289: INFO: Created: latency-svc-wpcs9
Aug 28 14:13:49.301: INFO: Got endpoints: latency-svc-wpcs9 [909.316266ms]
Aug 28 14:13:49.334: INFO: Created: latency-svc-rl72j
Aug 28 14:13:49.402: INFO: Got endpoints: latency-svc-rl72j [976.454469ms]
Aug 28 14:13:49.403: INFO: Created: latency-svc-n4gqm
Aug 28 14:13:49.412: INFO: Got endpoints: latency-svc-n4gqm [896.345586ms]
Aug 28 14:13:49.434: INFO: Created: latency-svc-qnvtq
Aug 28 14:13:49.468: INFO: Got endpoints: latency-svc-qnvtq [861.421182ms]
Aug 28 14:13:49.563: INFO: Created: latency-svc-7j2hq
Aug 28 14:13:49.567: INFO: Got endpoints: latency-svc-7j2hq [890.145959ms]
Aug 28 14:13:49.646: INFO: Created: latency-svc-dk9b6
Aug 28 14:13:49.658: INFO: Got endpoints: latency-svc-dk9b6 [894.57892ms]
Aug 28 14:13:49.705: INFO: Created: latency-svc-fdnhg
Aug 28 14:13:49.731: INFO: Created: latency-svc-tpxq5
Aug 28 14:13:49.731: INFO: Got endpoints: latency-svc-fdnhg [905.42201ms]
Aug 28 14:13:49.743: INFO: Got endpoints: latency-svc-tpxq5 [871.646562ms]
Aug 28 14:13:49.761: INFO: Created: latency-svc-7dprj
Aug 28 14:13:49.780: INFO: Got endpoints: latency-svc-7dprj [878.06431ms]
Aug 28 14:13:49.798: INFO: Created: latency-svc-lrq4z
Aug 28 14:13:49.844: INFO: Got endpoints: latency-svc-lrq4z [878.452267ms]
Aug 28 14:13:49.854: INFO: Created: latency-svc-2qdvb
Aug 28 14:13:49.870: INFO: Got endpoints: latency-svc-2qdvb [855.821119ms]
Aug 28 14:13:49.891: INFO: Created: latency-svc-gj9dj
Aug 28 14:13:49.906: INFO: Got endpoints: latency-svc-gj9dj [850.054465ms]
Aug 28 14:13:49.933: INFO: Created: latency-svc-l9hgs
Aug 28 14:13:49.977: INFO: Got endpoints: latency-svc-l9hgs [854.861874ms]
Aug 28 14:13:49.991: INFO: Created: latency-svc-p75qx
Aug 28 14:13:50.004: INFO: Got endpoints: latency-svc-p75qx [818.71661ms]
Aug 28 14:13:50.027: INFO: Created: latency-svc-v97gx
Aug 28 14:13:50.039: INFO: Got endpoints: latency-svc-v97gx [776.905278ms]
Aug 28 14:13:50.062: INFO: Created: latency-svc-sdxwx
Aug 28 14:13:50.076: INFO: Got endpoints: latency-svc-sdxwx [774.911669ms]
Aug 28 14:13:50.132: INFO: Created: latency-svc-77jff
Aug 28 14:13:50.149: INFO: Got endpoints: latency-svc-77jff [747.611653ms]
Aug 28 14:13:50.188: INFO: Created: latency-svc-v6c44
Aug 28 14:13:50.204: INFO: Got endpoints: latency-svc-v6c44 [791.740802ms]
Aug 28 14:13:50.271: INFO: Created: latency-svc-m6qvz
Aug 28 14:13:50.276: INFO: Got endpoints: latency-svc-m6qvz [807.797572ms]
Aug 28 14:13:50.336: INFO: Created: latency-svc-6zxvj
Aug 28 14:13:50.407: INFO: Got endpoints: latency-svc-6zxvj [839.124593ms]
Aug 28 14:13:50.415: INFO: Created: latency-svc-8dtjg
Aug 28 14:13:50.434: INFO: Got endpoints: latency-svc-8dtjg [775.856916ms]
Aug 28 14:13:50.458: INFO: Created: latency-svc-42d4m
Aug 28 14:13:50.475: INFO: Got endpoints: latency-svc-42d4m [743.436379ms]
Aug 28 14:13:50.497: INFO: Created: latency-svc-mqhrs
Aug 28 14:13:50.558: INFO: Got endpoints: latency-svc-mqhrs [815.085256ms]
Aug 28 14:13:50.575: INFO: Created: latency-svc-vh7sd
Aug 28 14:13:50.589: INFO: Got endpoints: latency-svc-vh7sd [809.095509ms]
Aug 28 14:13:50.607: INFO: Created: latency-svc-g5m4l
Aug 28 14:13:50.620: INFO: Got endpoints: latency-svc-g5m4l [776.104114ms]
Aug 28 14:13:50.637: INFO: Created: latency-svc-xhs6j
Aug 28 14:13:50.718: INFO: Got endpoints: latency-svc-xhs6j [848.392313ms]
Aug 28 14:13:50.732: INFO: Created: latency-svc-dh6dl
Aug 28 14:13:50.762: INFO: Got endpoints: latency-svc-dh6dl [855.656666ms]
Aug 28 14:13:50.858: INFO: Created: latency-svc-p44t4
Aug 28 14:13:50.866: INFO: Got endpoints: latency-svc-p44t4 [889.206208ms]
Aug 28 14:13:50.892: INFO: Created: latency-svc-2sz6q
Aug 28 14:13:50.902: INFO: Got endpoints: latency-svc-2sz6q [898.142753ms]
Aug 28 14:13:50.921: INFO: Created: latency-svc-gmwj4
Aug 28 14:13:50.947: INFO: Got endpoints: latency-svc-gmwj4 [908.158421ms]
Aug 28 14:13:51.006: INFO: Created: latency-svc-8rk5p
Aug 28 14:13:51.029: INFO: Got endpoints: latency-svc-8rk5p [953.19828ms]
Aug 28 14:13:51.085: INFO: Created: latency-svc-frpvr
Aug 28 14:13:51.103: INFO: Got endpoints: latency-svc-frpvr [953.867968ms]
Aug 28 14:13:51.150: INFO: Created: latency-svc-hklgm
Aug 28 14:13:51.200: INFO: Got endpoints: latency-svc-hklgm [995.573662ms]
Aug 28 14:13:51.236: INFO: Created: latency-svc-2vdnp
Aug 28 14:13:51.287: INFO: Got endpoints: latency-svc-2vdnp [1.010709791s]
Aug 28 14:13:51.305: INFO: Created: latency-svc-zvqtw
Aug 28 14:13:51.324: INFO: Got endpoints: latency-svc-zvqtw [917.529269ms]
Aug 28 14:13:51.355: INFO: Created: latency-svc-8x67l
Aug 28 14:13:51.372: INFO: Got endpoints: latency-svc-8x67l [937.410021ms]
Aug 28 14:13:51.434: INFO: Created: latency-svc-tjbfg
Aug 28 14:13:51.439: INFO: Got endpoints: latency-svc-tjbfg [963.587223ms]
Aug 28 14:13:51.464: INFO: Created: latency-svc-4svvv
Aug 28 14:13:51.477: INFO: Got endpoints: latency-svc-4svvv [918.801026ms]
Aug 28 14:13:51.520: INFO: Created: latency-svc-4jxpr
Aug 28 14:13:51.617: INFO: Got endpoints: latency-svc-4jxpr [1.027448534s]
Aug 28 14:13:51.621: INFO: Created: latency-svc-x89vm
Aug 28 14:13:51.628: INFO: Got endpoints: latency-svc-x89vm [1.007519573s]
Aug 28 14:13:51.649: INFO: Created: latency-svc-68x8q
Aug 28 14:13:51.666: INFO: Got endpoints: latency-svc-68x8q [947.369036ms]
Aug 28 14:13:51.686: INFO: Created: latency-svc-tzqzd
Aug 28 14:13:51.755: INFO: Got endpoints: latency-svc-tzqzd [993.006127ms]
Aug 28 14:13:51.772: INFO: Created: latency-svc-cjw89
Aug 28 14:13:51.792: INFO: Got endpoints: latency-svc-cjw89 [925.262183ms]
Aug 28 14:13:51.824: INFO: Created: latency-svc-lvt98
Aug 28 14:13:51.842: INFO: Got endpoints: latency-svc-lvt98 [939.313195ms]
Aug 28 14:13:51.917: INFO: Created: latency-svc-w9shg
Aug 28 14:13:51.934: INFO: Got endpoints: latency-svc-w9shg [986.39985ms]
Aug 28 14:13:51.969: INFO: Created: latency-svc-khrng
Aug 28 14:13:51.972: INFO: Got endpoints: latency-svc-khrng [942.184907ms]
Aug 28 14:13:52.001: INFO: Created: latency-svc-8mrs2
Aug 28 14:13:52.012: INFO: Got endpoints: latency-svc-8mrs2 [908.174027ms]
Aug 28 14:13:52.062: INFO: Created: latency-svc-cmmjt
Aug 28 14:13:52.082: INFO: Got endpoints: latency-svc-cmmjt [881.994279ms]
Aug 28 14:13:52.126: INFO: Created: latency-svc-z6fh5
Aug 28 14:13:52.161: INFO: Got endpoints: latency-svc-z6fh5 [873.841833ms]
Aug 28 14:13:52.226: INFO: Created: latency-svc-ttxlg
Aug 28 14:13:52.239: INFO: Got endpoints: latency-svc-ttxlg [914.345517ms]
Aug 28 14:13:52.287: INFO: Created: latency-svc-rzkdx
Aug 28 14:13:52.422: INFO: Got endpoints: latency-svc-rzkdx [1.049632385s]
Aug 28 14:13:52.433: INFO: Created: latency-svc-m56t6
Aug 28 14:13:52.502: INFO: Got endpoints: latency-svc-m56t6 [1.063345764s]
Aug 28 14:13:52.811: INFO: Created: latency-svc-nnns5
Aug 28 14:13:53.069: INFO: Got endpoints: latency-svc-nnns5 [1.59158245s]
Aug 28 14:13:53.072: INFO: Created: latency-svc-gb8fz
Aug 28 14:13:53.122: INFO: Got endpoints: latency-svc-gb8fz [1.505418009s]
Aug 28 14:13:53.248: INFO: Created: latency-svc-lfkdw
Aug 28 14:13:53.389: INFO: Got endpoints: latency-svc-lfkdw [1.760821035s]
Aug 28 14:13:53.395: INFO: Created: latency-svc-wfpfv
Aug 28 14:13:53.414: INFO: Got endpoints: latency-svc-wfpfv [1.748190432s]
Aug 28 14:13:53.927: INFO: Created: latency-svc-x94jj
Aug 28 14:13:53.959: INFO: Got endpoints: latency-svc-x94jj [2.204237023s]
Aug 28 14:13:54.059: INFO: Created: latency-svc-gz2n4
Aug 28 14:13:54.073: INFO: Got endpoints: latency-svc-gz2n4 [2.28168496s]
Aug 28 14:13:54.112: INFO: Created: latency-svc-d8qs4
Aug 28 14:13:54.179: INFO: Got endpoints: latency-svc-d8qs4 [2.337583431s]
Aug 28 14:13:54.181: INFO: Created: latency-svc-v58sv
Aug 28 14:13:54.188: INFO: Got endpoints: latency-svc-v58sv [2.254100158s]
Aug 28 14:13:54.220: INFO: Created: latency-svc-tttvv
Aug 28 14:13:54.263: INFO: Got endpoints: latency-svc-tttvv [2.291056475s]
Aug 28 14:13:54.331: INFO: Created: latency-svc-cl6f5
Aug 28 14:13:54.352: INFO: Got endpoints: latency-svc-cl6f5 [2.340027886s]
Aug 28 14:13:54.371: INFO: Created: latency-svc-7lf82
Aug 28 14:13:54.390: INFO: Got endpoints: latency-svc-7lf82 [2.307696742s]
Aug 28 14:13:54.407: INFO: Created: latency-svc-47ld6
Aug 28 14:13:54.420: INFO: Got endpoints: latency-svc-47ld6 [2.259334652s]
Aug 28 14:13:54.473: INFO: Created: latency-svc-qrhbq
Aug 28 14:13:54.500: INFO: Got endpoints: latency-svc-qrhbq [2.260488745s]
Aug 28 14:13:54.500: INFO: Created: latency-svc-9d4kr
Aug 28 14:13:54.525: INFO: Got endpoints: latency-svc-9d4kr [2.102394357s]
Aug 28 14:13:54.554: INFO: Created: latency-svc-klcm9
Aug 28 14:13:54.558: INFO: Got endpoints: latency-svc-klcm9 [2.055424344s]
Aug 28 14:13:54.624: INFO: Created: latency-svc-4dg8h
Aug 28 14:13:54.636: INFO: Got endpoints: latency-svc-4dg8h [1.567151389s]
Aug 28 14:13:54.666: INFO: Created: latency-svc-5k4zv
Aug 28 14:13:54.691: INFO: Got endpoints: latency-svc-5k4zv [1.568908806s]
Aug 28 14:13:54.754: INFO: Created: latency-svc-rfl85
Aug 28 14:13:54.777: INFO: Created: latency-svc-n6w7s
Aug 28 14:13:54.777: INFO: Got endpoints: latency-svc-rfl85 [1.38813595s]
Aug 28 14:13:54.827: INFO: Created: latency-svc-2t97r
Aug 28 14:13:54.827: INFO: Got endpoints: latency-svc-n6w7s [1.412738056s]
Aug 28 14:13:54.941: INFO: Got endpoints: latency-svc-2t97r [981.137661ms]
Aug 28 14:13:55.017: INFO: Created: latency-svc-qh587
Aug 28 14:13:55.036: INFO: Created: latency-svc-pncp4
Aug 28 14:13:55.037: INFO: Got endpoints: latency-svc-qh587 [963.036664ms]
Aug 28 14:13:55.102: INFO: Got endpoints: latency-svc-pncp4 [921.997535ms]
Aug 28 14:13:55.127: INFO: Created: latency-svc-frbw4
Aug 28 14:13:55.167: INFO: Got endpoints: latency-svc-frbw4 [978.147049ms]
Aug 28 14:13:55.302: INFO: Created: latency-svc-5wcxg
Aug 28 14:13:55.314: INFO: Got endpoints: latency-svc-5wcxg [1.050922142s]
Aug 28 14:13:55.460: INFO: Created: latency-svc-6mz79
Aug 28 14:13:55.489: INFO: Created: latency-svc-hj6ps
Aug 28 14:13:55.489: INFO: Got endpoints: latency-svc-6mz79 [1.13720307s]
Aug 28 14:13:55.514: INFO: Got endpoints: latency-svc-hj6ps [1.124384841s]
Aug 28 14:13:55.551: INFO: Created: latency-svc-wqprt
Aug 28 14:13:55.624: INFO: Got endpoints: latency-svc-wqprt [1.203308378s]
Aug 28 14:13:55.644: INFO: Created: latency-svc-wcpqh
Aug 28 14:13:55.657: INFO: Got endpoints: latency-svc-wcpqh [1.157506307s]
Aug 28 14:13:55.680: INFO: Created: latency-svc-qpdfs
Aug 28 14:13:55.688: INFO: Got endpoints: latency-svc-qpdfs [1.163540964s]
Aug 28 14:13:55.709: INFO: Created: latency-svc-qbr7c
Aug 28 14:13:55.773: INFO: Got endpoints: latency-svc-qbr7c [1.214805713s]
Aug 28 14:13:55.784: INFO: Created: latency-svc-mzmd6
Aug 28 14:13:55.796: INFO: Got endpoints: latency-svc-mzmd6 [1.159943349s]
Aug 28 14:13:55.823: INFO: Created: latency-svc-9gzz4
Aug 28 14:13:55.839: INFO: Got endpoints: latency-svc-9gzz4 [1.147337971s]
Aug 28 14:13:55.860: INFO: Created: latency-svc-msv8v
Aug 28 14:13:55.870: INFO: Got endpoints: latency-svc-msv8v [1.093022519s]
Aug 28 14:13:55.943: INFO: Created: latency-svc-fsbrl
Aug 28 14:13:55.965: INFO: Got endpoints: latency-svc-fsbrl [1.137698469s]
Aug 28 14:13:56.021: INFO: Created: latency-svc-gswct
Aug 28 14:13:56.095: INFO: Got endpoints: latency-svc-gswct [1.1541222s]
Aug 28 14:13:56.099: INFO: Created: latency-svc-5fp44
Aug 28 14:13:56.164: INFO: Got endpoints: latency-svc-5fp44 [1.12715145s]
Aug 28 14:13:56.165: INFO: Latencies: [645.701472ms 743.436379ms 747.611653ms 774.911669ms 775.856916ms 776.104114ms 776.905278ms 791.740802ms 798.565041ms 807.797572ms 809.095509ms 815.085256ms 818.71661ms 835.410752ms 836.941081ms 839.124593ms 848.392313ms 850.054465ms 854.861874ms 855.656666ms 855.821119ms 859.981829ms 861.421182ms 871.646562ms 871.749222ms 873.841833ms 878.06431ms 878.452267ms 881.994279ms 889.206208ms 890.145959ms 892.12296ms 892.707571ms 894.57892ms 896.345586ms 898.142753ms 899.353589ms 904.043259ms 905.42201ms 908.158421ms 908.174027ms 909.316266ms 909.701899ms 909.843ms 909.871312ms 909.978313ms 914.345517ms 917.529269ms 918.801026ms 921.997535ms 925.262183ms 935.024653ms 937.410021ms 939.313195ms 940.419567ms 940.688851ms 940.706298ms 942.184907ms 947.369036ms 947.520371ms 949.489337ms 953.19828ms 953.867968ms 955.896551ms 961.768911ms 962.486015ms 963.036664ms 963.095125ms 963.587223ms 969.307662ms 970.32059ms 972.143237ms 973.783745ms 974.306219ms 976.454469ms 978.147049ms 981.137661ms 984.764927ms 986.39985ms 988.006208ms 989.198806ms 993.006127ms 994.015095ms 995.573662ms 997.520914ms 997.883427ms 1.004600687s 1.006333226s 1.007519573s 1.010709791s 1.011673387s 1.01668676s 1.018350656s 1.020048032s 1.027448534s 1.048505677s 1.049632385s 1.050922142s 1.051060232s 1.058325743s 1.063345764s 1.070935249s 1.093022519s 1.111856853s 1.117713689s 1.122152989s 1.124384841s 1.12715145s 1.131803199s 1.13720307s 1.137684663s 1.137698469s 1.144948781s 1.147337971s 1.1541222s 1.157506307s 1.159943349s 1.163540964s 1.166367866s 1.180834929s 1.191841378s 1.194129585s 1.201025292s 1.203308378s 1.20560587s 1.206004978s 1.214805713s 1.214828021s 1.215617701s 1.228248455s 1.231491861s 1.23334393s 1.240543104s 1.242727456s 1.242914661s 1.255206014s 1.257218394s 1.260615556s 1.263919125s 1.265867718s 1.275439969s 1.310502045s 1.3160777s 1.364959817s 1.379520122s 1.38813595s 1.412738056s 1.464435945s 1.470327577s 1.505418009s 1.527275575s 1.567151389s 
1.568908806s 1.59158245s 1.592791247s 1.611584277s 1.623133969s 1.653926735s 1.708097746s 1.748190432s 1.760821035s 1.76798641s 1.819151747s 1.843944805s 1.848898592s 1.928186062s 1.944801021s 1.981538714s 2.011185994s 2.055424344s 2.087213584s 2.102394357s 2.132718707s 2.160724365s 2.187754369s 2.204237023s 2.254100158s 2.259334652s 2.260488745s 2.263557567s 2.28168496s 2.291056475s 2.307696742s 2.32874042s 2.337583431s 2.340027886s 2.395362974s 2.429120409s 2.450872655s 2.499999476s 2.678210638s 2.840162365s 2.875849462s 3.130035594s 3.141051886s 3.223683569s 3.288729874s 3.302869934s 3.306609172s 3.316578023s]
Aug 28 14:13:56.166: INFO: 50 %ile: 1.063345764s
Aug 28 14:13:56.166: INFO: 90 %ile: 2.28168496s
Aug 28 14:13:56.166: INFO: 99 %ile: 3.306609172s
Aug 28 14:13:56.166: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:13:56.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-1258" for this suite.

• [SLOW TEST:26.641 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":161,"skipped":2707,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:13:56.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 28 14:13:56.465: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:13:56.467: INFO: Number of nodes with available pods: 0
Aug 28 14:13:56.467: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:13:57.481: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:13:57.489: INFO: Number of nodes with available pods: 0
Aug 28 14:13:57.489: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:13:59.068: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:13:59.120: INFO: Number of nodes with available pods: 0
Aug 28 14:13:59.120: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:13:59.692: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:13:59.711: INFO: Number of nodes with available pods: 0
Aug 28 14:13:59.712: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:14:00.670: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:14:00.675: INFO: Number of nodes with available pods: 0
Aug 28 14:14:00.676: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:14:01.537: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:14:01.603: INFO: Number of nodes with available pods: 1
Aug 28 14:14:01.603: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:14:02.545: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:14:02.621: INFO: Number of nodes with available pods: 2
Aug 28 14:14:02.621: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 28 14:14:03.264: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:14:03.385: INFO: Number of nodes with available pods: 1
Aug 28 14:14:03.385: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:14:04.414: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:14:05.194: INFO: Number of nodes with available pods: 1
Aug 28 14:14:05.194: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:14:05.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:14:05.967: INFO: Number of nodes with available pods: 1
Aug 28 14:14:05.967: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:14:06.405: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:14:06.456: INFO: Number of nodes with available pods: 1
Aug 28 14:14:06.456: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:14:07.601: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:14:07.777: INFO: Number of nodes with available pods: 1
Aug 28 14:14:07.777: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:14:08.449: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:14:08.509: INFO: Number of nodes with available pods: 1
Aug 28 14:14:08.509: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:14:09.399: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:14:09.443: INFO: Number of nodes with available pods: 2
Aug 28 14:14:09.443: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7396, will wait for the garbage collector to delete the pods
Aug 28 14:14:09.634: INFO: Deleting DaemonSet.extensions daemon-set took: 101.37116ms
Aug 28 14:14:09.735: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.590238ms
Aug 28 14:14:18.078: INFO: Number of nodes with available pods: 0
Aug 28 14:14:18.078: INFO: Number of running nodes: 0, number of available pods: 0
Aug 28 14:14:18.132: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7396/daemonsets","resourceVersion":"1770415"},"items":null}

Aug 28 14:14:18.173: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7396/pods","resourceVersion":"1770416"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:14:18.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7396" for this suite.

• [SLOW TEST:22.065 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":162,"skipped":2710,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:14:18.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:14:18.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2491" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":163,"skipped":2725,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:14:19.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 28 14:14:19.786: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 14:14:40.438: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:15:52.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3631" for this suite.

• [SLOW TEST:93.499 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":164,"skipped":2750,"failed":0}
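The CustomResourcePublishOpenAPI test above checks that two CRDs sharing a group and version but having different kinds both show up in the aggregated OpenAPI document. A sketch of that property over a toy `definitions` map, assuming only the real `x-kubernetes-group-version-kind` extension the apiserver attaches to published schemas (the definition names and kinds below are illustrative, not from the log):

```python
def kinds_published(openapi_definitions, group, version):
    """Collect the kinds a /openapi/v2 'definitions' map exposes for a
    given CRD group/version via x-kubernetes-group-version-kind."""
    kinds = set()
    for schema in openapi_definitions.values():
        for gvk in schema.get("x-kubernetes-group-version-kind", []):
            if gvk["group"] == group and gvk["version"] == version:
                kinds.add(gvk["kind"])
    return kinds

# Toy stand-in for a real /openapi/v2 response body.
definitions = {
    "com.example.stable.v1.Foo": {
        "x-kubernetes-group-version-kind": [
            {"group": "stable.example.com", "version": "v1", "kind": "Foo"}]},
    "com.example.stable.v1.Bar": {
        "x-kubernetes-group-version-kind": [
            {"group": "stable.example.com", "version": "v1", "kind": "Bar"}]},
}

# Same group/version, two distinct kinds published.
assert kinds_published(definitions, "stable.example.com", "v1") == {"Foo", "Bar"}
```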
SSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:15:52.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 28 14:16:01.064: INFO: &Pod{ObjectMeta:{send-events-37bab1bd-6f1f-431e-931f-143c3fd16d13  events-6280 /api/v1/namespaces/events-6280/pods/send-events-37bab1bd-6f1f-431e-931f-143c3fd16d13 434b07a6-61c1-49dd-9763-1dfe444b8433 1771406 0 2020-08-28 14:15:53 +0000 UTC   map[name:foo time:602644] map[] [] []  [{e2e.test Update v1 2020-08-28 14:15:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 
34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:16:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 57 92 34 125 34 
58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x78rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x78rp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x78rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolic
y:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:15:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:16:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:16:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:15:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.29,StartTime:2020-08-28 14:15:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 14:15:59 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://5e5109e2cea0de193685e0ecaf8c79baf70211c0c4253a71f41b0b24e37b6eb3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.29,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
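The `managedFields` entries in the pod dump above print their FieldsV1 payload as a Go byte slice (`Raw:*[123 34 102 58 ...]`). Each number is one byte of UTF-8 encoded JSON, so the payload is readable after a straightforward decode; a small sketch, not part of the test suite:

```python
def decode_fieldsv1(raw):
    """Turn a FieldsV1 'Raw:*[...]' byte listing back into its JSON text."""
    return bytes(raw).decode("utf-8")

# The first bytes of the e2e.test managedFields entry from the dump above.
prefix = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123]
print(decode_fieldsv1(prefix))  # {"f:metadata":{
```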

STEP: checking for scheduler event about the pod
Aug 28 14:16:03.076: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 28 14:16:05.084: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:16:05.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6280" for this suite.

• [SLOW TEST:12.323 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":165,"skipped":2756,"failed":0}
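The Events test above waits until it has seen one event emitted by the scheduler and one emitted by the kubelet for the pod it created. A minimal sketch of that check, using plain dicts in place of `v1.Event` objects (field names follow the Event API, `involvedObject.name` and `source.component`; the pod name and reasons are illustrative):

```python
def saw_event(events, pod_name, component):
    """True if any event targets the pod and was emitted by the component."""
    return any(
        e["involvedObject"]["name"] == pod_name
        and e["source"]["component"] == component
        for e in events
    )

events = [
    {"involvedObject": {"name": "send-events-xyz"},
     "source": {"component": "default-scheduler"}, "reason": "Scheduled"},
    {"involvedObject": {"name": "send-events-xyz"},
     "source": {"component": "kubelet"}, "reason": "Started"},
]

assert saw_event(events, "send-events-xyz", "default-scheduler")
assert saw_event(events, "send-events-xyz", "kubelet")
```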
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:16:05.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 28 14:16:05.412: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 28 14:16:05.563: INFO: Waiting for terminating namespaces to be deleted...
Aug 28 14:16:05.567: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 28 14:16:05.595: INFO: kindnet-f7bnz from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 28 14:16:05.595: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 14:16:05.595: INFO: kube-proxy-hhbw6 from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 28 14:16:05.595: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 14:16:05.595: INFO: daemon-set-rsfwc from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 28 14:16:05.595: INFO: 	Container app ready: true, restart count 0
Aug 28 14:16:05.595: INFO: send-events-37bab1bd-6f1f-431e-931f-143c3fd16d13 from events-6280 started at 2020-08-28 14:15:53 +0000 UTC (1 container statuses recorded)
Aug 28 14:16:05.595: INFO: 	Container p ready: true, restart count 0
Aug 28 14:16:05.595: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 28 14:16:05.624: INFO: daemon-set-69cql from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 28 14:16:05.624: INFO: 	Container app ready: true, restart count 0
Aug 28 14:16:05.624: INFO: kindnet-4v6sn from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 28 14:16:05.624: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 14:16:05.624: INFO: kube-proxy-m77qg from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 28 14:16:05.624: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d616c963-57d8-4e3b-ad3e-67cb33690bf9 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-d616c963-57d8-4e3b-ad3e-67cb33690bf9 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d616c963-57d8-4e3b-ad3e-67cb33690bf9
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:16:15.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2155" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:10.662 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":166,"skipped":2761,"failed":0}
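The NodeSelector predicate exercised above is a subset check: a pod with `spec.nodeSelector` fits a node only if every selector key/value pair appears in the node's labels. A sketch of that rule, using the random label key and value `42` applied to kali-worker2 in the run above (the extra hostname label is illustrative):

```python
def node_selector_matches(node_labels, node_selector):
    """True when every nodeSelector entry is present in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

labels = {
    "kubernetes.io/e2e-d616c963-57d8-4e3b-ad3e-67cb33690bf9": "42",
    "kubernetes.io/hostname": "kali-worker2",
}

# The relaunched pod selects on the applied label, so it must land here.
assert node_selector_matches(
    labels, {"kubernetes.io/e2e-d616c963-57d8-4e3b-ad3e-67cb33690bf9": "42"})
assert not node_selector_matches(labels, {"unrelated": "label"})
```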
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:16:15.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-dad5ad96-74af-4aeb-a4f8-448559a968e6
STEP: Creating a pod to test consume configMaps
Aug 28 14:16:15.991: INFO: Waiting up to 5m0s for pod "pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e" in namespace "configmap-9500" to be "Succeeded or Failed"
Aug 28 14:16:16.072: INFO: Pod "pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 80.531518ms
Aug 28 14:16:18.899: INFO: Pod "pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.907663036s
Aug 28 14:16:21.160: INFO: Pod "pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.168676329s
Aug 28 14:16:23.164: INFO: Pod "pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.172447272s
Aug 28 14:16:25.476: INFO: Pod "pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.485105156s
Aug 28 14:16:27.482: INFO: Pod "pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.490956864s
STEP: Saw pod success
Aug 28 14:16:27.482: INFO: Pod "pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e" satisfied condition "Succeeded or Failed"
Aug 28 14:16:27.486: INFO: Trying to get logs from node kali-worker pod pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e container configmap-volume-test: 
STEP: delete the pod
Aug 28 14:16:27.588: INFO: Waiting for pod pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e to disappear
Aug 28 14:16:27.600: INFO: Pod pod-configmaps-02543fc3-c184-4578-9ce9-1c66efad7a8e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:16:27.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9500" for this suite.

• [SLOW TEST:11.785 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2766,"failed":0}
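The ConfigMap test above mounts a single ConfigMap through two separate volumes in one pod. A sketch of the invariant being exercised, over a toy pod spec dict (volume and ConfigMap names are illustrative, not from the log):

```python
pod_spec = {
    "volumes": [
        {"name": "configmap-volume-1", "configMap": {"name": "cm-test"}},
        {"name": "configmap-volume-2", "configMap": {"name": "cm-test"}},
    ],
}

# Two distinct volumes, both backed by the same ConfigMap.
sources = {v["configMap"]["name"]
           for v in pod_spec["volumes"] if "configMap" in v}
assert len(pod_spec["volumes"]) == 2 and sources == {"cm-test"}
```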
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:16:27.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-326
STEP: Creating active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-326
STEP: creating replication controller externalsvc in namespace services-326
I0828 14:16:27.945154      11 runners.go:190] Created replication controller with name: externalsvc, namespace: services-326, replica count: 2
I0828 14:16:30.996395      11 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:16:33.997371      11 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:16:36.997984      11 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:16:39.998428      11 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 28 14:16:43.187: INFO: Creating new exec pod
Aug 28 14:16:51.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-326 execpodvhzkz -- /bin/sh -x -c nslookup nodeport-service'
Aug 28 14:16:58.087: INFO: stderr: "I0828 14:16:57.972305    3197 log.go:172] (0x4000b02bb0) (0x40007f5540) Create stream\nI0828 14:16:57.975894    3197 log.go:172] (0x4000b02bb0) (0x40007f5540) Stream added, broadcasting: 1\nI0828 14:16:57.987599    3197 log.go:172] (0x4000b02bb0) Reply frame received for 1\nI0828 14:16:57.988160    3197 log.go:172] (0x4000b02bb0) (0x40007f5680) Create stream\nI0828 14:16:57.988216    3197 log.go:172] (0x4000b02bb0) (0x40007f5680) Stream added, broadcasting: 3\nI0828 14:16:57.989543    3197 log.go:172] (0x4000b02bb0) Reply frame received for 3\nI0828 14:16:57.989832    3197 log.go:172] (0x4000b02bb0) (0x4000c940a0) Create stream\nI0828 14:16:57.989909    3197 log.go:172] (0x4000b02bb0) (0x4000c940a0) Stream added, broadcasting: 5\nI0828 14:16:57.991928    3197 log.go:172] (0x4000b02bb0) Reply frame received for 5\nI0828 14:16:58.064514    3197 log.go:172] (0x4000b02bb0) Data frame received for 5\nI0828 14:16:58.064704    3197 log.go:172] (0x4000c940a0) (5) Data frame handling\nI0828 14:16:58.065216    3197 log.go:172] (0x4000c940a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0828 14:16:58.067286    3197 log.go:172] (0x4000b02bb0) Data frame received for 3\nI0828 14:16:58.067352    3197 log.go:172] (0x40007f5680) (3) Data frame handling\nI0828 14:16:58.067418    3197 log.go:172] (0x40007f5680) (3) Data frame sent\nI0828 14:16:58.068097    3197 log.go:172] (0x4000b02bb0) Data frame received for 3\nI0828 14:16:58.068170    3197 log.go:172] (0x40007f5680) (3) Data frame handling\nI0828 14:16:58.068276    3197 log.go:172] (0x40007f5680) (3) Data frame sent\nI0828 14:16:58.068543    3197 log.go:172] (0x4000b02bb0) Data frame received for 3\nI0828 14:16:58.068603    3197 log.go:172] (0x40007f5680) (3) Data frame handling\nI0828 14:16:58.068952    3197 log.go:172] (0x4000b02bb0) Data frame received for 5\nI0828 14:16:58.069042    3197 log.go:172] (0x4000c940a0) (5) Data frame handling\nI0828 14:16:58.070065    3197 log.go:172] 
(0x4000b02bb0) Data frame received for 1\nI0828 14:16:58.070119    3197 log.go:172] (0x40007f5540) (1) Data frame handling\nI0828 14:16:58.070175    3197 log.go:172] (0x40007f5540) (1) Data frame sent\nI0828 14:16:58.071537    3197 log.go:172] (0x4000b02bb0) (0x40007f5540) Stream removed, broadcasting: 1\nI0828 14:16:58.073280    3197 log.go:172] (0x4000b02bb0) Go away received\nI0828 14:16:58.074753    3197 log.go:172] (0x4000b02bb0) (0x40007f5540) Stream removed, broadcasting: 1\nI0828 14:16:58.075161    3197 log.go:172] (0x4000b02bb0) (0x40007f5680) Stream removed, broadcasting: 3\nI0828 14:16:58.075443    3197 log.go:172] (0x4000b02bb0) (0x4000c940a0) Stream removed, broadcasting: 5\n"
Aug 28 14:16:58.088: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-326.svc.cluster.local\tcanonical name = externalsvc.services-326.svc.cluster.local.\nName:\texternalsvc.services-326.svc.cluster.local\nAddress: 10.101.191.129\n\n"
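The nslookup stdout above is what proves the type change took effect: the NodePort service name now resolves via a CNAME to the ExternalName target. A sketch of extracting that canonical name from the captured output:

```python
# The stdout captured in the log line above, verbatim.
stdout = ("Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\n"
          "nodeport-service.services-326.svc.cluster.local\tcanonical name = "
          "externalsvc.services-326.svc.cluster.local.\n"
          "Name:\texternalsvc.services-326.svc.cluster.local\n"
          "Address: 10.101.191.129\n\n")

def canonical_name(nslookup_out):
    """Pull the CNAME target out of nslookup output, without the
    trailing root dot; None if no CNAME line is present."""
    for line in nslookup_out.splitlines():
        if "canonical name =" in line:
            return line.split("canonical name =")[1].strip().rstrip(".")
    return None

assert canonical_name(stdout) == "externalsvc.services-326.svc.cluster.local"
```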
STEP: deleting ReplicationController externalsvc in namespace services-326, will wait for the garbage collector to delete the pods
Aug 28 14:16:58.882: INFO: Deleting ReplicationController externalsvc took: 659.910211ms
Aug 28 14:16:59.883: INFO: Terminating ReplicationController externalsvc pods took: 1.000913038s
Aug 28 14:17:18.722: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:17:18.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-326" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:51.591 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":168,"skipped":2776,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:17:19.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Aug 28 14:17:20.815: INFO: Waiting up to 5m0s for pod "client-containers-4e57361a-69ed-4149-8f0f-72d2b386407f" in namespace "containers-925" to be "Succeeded or Failed"
Aug 28 14:17:20.887: INFO: Pod "client-containers-4e57361a-69ed-4149-8f0f-72d2b386407f": Phase="Pending", Reason="", readiness=false. Elapsed: 71.337545ms
Aug 28 14:17:22.893: INFO: Pod "client-containers-4e57361a-69ed-4149-8f0f-72d2b386407f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077971138s
Aug 28 14:17:24.935: INFO: Pod "client-containers-4e57361a-69ed-4149-8f0f-72d2b386407f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119880401s
Aug 28 14:17:27.364: INFO: Pod "client-containers-4e57361a-69ed-4149-8f0f-72d2b386407f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.548917877s
Aug 28 14:17:29.556: INFO: Pod "client-containers-4e57361a-69ed-4149-8f0f-72d2b386407f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.740401318s
STEP: Saw pod success
Aug 28 14:17:29.556: INFO: Pod "client-containers-4e57361a-69ed-4149-8f0f-72d2b386407f" satisfied condition "Succeeded or Failed"
Aug 28 14:17:29.602: INFO: Trying to get logs from node kali-worker pod client-containers-4e57361a-69ed-4149-8f0f-72d2b386407f container test-container: 
STEP: delete the pod
Aug 28 14:17:29.731: INFO: Waiting for pod client-containers-4e57361a-69ed-4149-8f0f-72d2b386407f to disappear
Aug 28 14:17:29.769: INFO: Pod client-containers-4e57361a-69ed-4149-8f0f-72d2b386407f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:17:29.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-925" for this suite.

• [SLOW TEST:10.577 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2794,"failed":0}
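The "override all" pod above sets both `command` and `args` on its container. Kubernetes combines these with the image's ENTRYPOINT and CMD according to a fixed rule, sketched below (the argv values are illustrative, not from the log):

```python
def effective_invocation(entrypoint, cmd, command=None, args=None):
    """Return the argv a container runs, per the Kubernetes rules for
    combining pod command/args with image ENTRYPOINT/CMD."""
    if command is not None and args is not None:
        return command + args            # override all: image defaults ignored
    if command is not None:
        return command                   # command alone: image CMD is ignored
    if args is not None:
        return entrypoint + args         # keep ENTRYPOINT, replace CMD
    return entrypoint + cmd              # image defaults

# "Override all", as in the test above.
assert effective_invocation(["/ep"], ["default-arg"],
                            command=["/agnhost"], args=["entrypoint-tester"]) \
    == ["/agnhost", "entrypoint-tester"]
# No overrides: image defaults apply.
assert effective_invocation(["/ep"], ["default-arg"]) == ["/ep", "default-arg"]
```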
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:17:29.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:17:30.694: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:17:32.802: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:17:34.893: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:17:36.737: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:17:38.832: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:17:40.701: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:17:42.701: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:17:44.730: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:17:46.701: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:17:49.055: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:17:50.701: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:17:52.701: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:17:54.701: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:17:56.703: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:17:58.699: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = false)
Aug 28 14:18:00.699: INFO: The status of Pod test-webserver-188685a8-b955-4b31-9cae-73af15c7e101 is Running (Ready = true)
Aug 28 14:18:00.704: INFO: Container started at 2020-08-28 14:17:37 +0000 UTC, pod became ready at 2020-08-28 14:17:59 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:18:00.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9366" for this suite.

• [SLOW TEST:30.932 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2829,"failed":0}
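Editor's note: the timing logged above (container started 14:17:37, pod Ready at 14:17:59) is consistent with a readiness probe carrying an initial delay of roughly 20 seconds, during which the pod reports Running (Ready = false). A pod spec of that shape looks like the following (probe values and image are assumptions, not read from the test source):

```yaml
# Sketch: a readiness probe with an initial delay.
# The pod stays Ready=false for at least initialDelaySeconds,
# and the container is never restarted by this probe.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver              # illustrative name
spec:
  containers:
  - name: test-webserver
    image: httpd                    # assumed image serving on port 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20       # probe does not run before this delay
      periodSeconds: 5
```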
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:18:00.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:18:06.709: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:18:10.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221088, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221085, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:18:12.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221088, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221085, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:18:14.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221088, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221085, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:18:16.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221088, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221085, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:18:18.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221088, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221085, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:18:20.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221086, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221088, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221085, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:18:24.101: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 28 14:18:30.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config attach --namespace=webhook-7350 to-be-attached-pod -i -c=container1'
Aug 28 14:18:32.033: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:18:32.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7350" for this suite.
STEP: Destroying namespace "webhook-7350-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:31.819 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":171,"skipped":2832,"failed":0}
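Editor's note: denying `kubectl attach` as above is done by registering a validating webhook on the pods/attach subresource's CONNECT operation. A sketch of such a registration follows (webhook name, service path, and CA bundle are placeholders; the service name and namespace are taken from the log above):

```yaml
# Sketch: reject the pods/attach subresource via a validating webhook.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod.example.com   # illustrative name
webhooks:
- name: deny-attaching-pod.example.com
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]       # `kubectl attach` issues a CONNECT
    resources: ["pods/attach"]
  clientConfig:
    service:
      name: e2e-test-webhook      # service name from the log above
      namespace: webhook-7350     # test namespace from the log above
      path: /pods/attach          # illustrative path
    caBundle: Cg==                # placeholder; base64-encoded CA cert
```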
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:18:32.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-189188fa-976a-4d59-96fc-af65d09e19fa
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-189188fa-976a-4d59-96fc-af65d09e19fa
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:20:10.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9991" for this suite.

• [SLOW TEST:97.635 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2848,"failed":0}
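Editor's note: the spec above waits (~97 s here) for a ConfigMap update to propagate into a projected volume mount. A pod consuming a ConfigMap that way can be sketched as follows (image and file layout are assumptions; the ConfigMap name is the one logged above):

```yaml
# Sketch: mount a ConfigMap through a projected volume.
# Edits to the ConfigMap are eventually reflected in the mounted files.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap   # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                # assumed image
    command: ["sh", "-c", "while true; do cat /etc/projected/*; sleep 5; done"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd-189188fa-976a-4d59-96fc-af65d09e19fa
```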
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:20:10.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 28 14:20:37.529: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2166 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:20:37.530: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:20:37.796819      11 log.go:172] (0x40031d4420) (0x4000910500) Create stream
I0828 14:20:37.796923      11 log.go:172] (0x40031d4420) (0x4000910500) Stream added, broadcasting: 1
I0828 14:20:37.799585      11 log.go:172] (0x40031d4420) Reply frame received for 1
I0828 14:20:37.799721      11 log.go:172] (0x40031d4420) (0x4000996320) Create stream
I0828 14:20:37.799793      11 log.go:172] (0x40031d4420) (0x4000996320) Stream added, broadcasting: 3
I0828 14:20:37.801104      11 log.go:172] (0x40031d4420) Reply frame received for 3
I0828 14:20:37.801275      11 log.go:172] (0x40031d4420) (0x40025c0c80) Create stream
I0828 14:20:37.801372      11 log.go:172] (0x40031d4420) (0x40025c0c80) Stream added, broadcasting: 5
I0828 14:20:37.802659      11 log.go:172] (0x40031d4420) Reply frame received for 5
I0828 14:20:37.860048      11 log.go:172] (0x40031d4420) Data frame received for 5
I0828 14:20:37.860176      11 log.go:172] (0x40025c0c80) (5) Data frame handling
I0828 14:20:37.860303      11 log.go:172] (0x40031d4420) Data frame received for 3
I0828 14:20:37.860409      11 log.go:172] (0x4000996320) (3) Data frame handling
I0828 14:20:37.860547      11 log.go:172] (0x4000996320) (3) Data frame sent
I0828 14:20:37.860643      11 log.go:172] (0x40031d4420) Data frame received for 3
I0828 14:20:37.860720      11 log.go:172] (0x4000996320) (3) Data frame handling
I0828 14:20:37.861433      11 log.go:172] (0x40031d4420) Data frame received for 1
I0828 14:20:37.861540      11 log.go:172] (0x4000910500) (1) Data frame handling
I0828 14:20:37.861676      11 log.go:172] (0x4000910500) (1) Data frame sent
I0828 14:20:37.861780      11 log.go:172] (0x40031d4420) (0x4000910500) Stream removed, broadcasting: 1
I0828 14:20:37.861878      11 log.go:172] (0x40031d4420) Go away received
I0828 14:20:37.862106      11 log.go:172] (0x40031d4420) (0x4000910500) Stream removed, broadcasting: 1
I0828 14:20:37.862184      11 log.go:172] (0x40031d4420) (0x4000996320) Stream removed, broadcasting: 3
I0828 14:20:37.862252      11 log.go:172] (0x40031d4420) (0x40025c0c80) Stream removed, broadcasting: 5
Aug 28 14:20:37.862: INFO: Exec stderr: ""
Aug 28 14:20:37.862: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2166 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:20:37.862: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:20:38.019478      11 log.go:172] (0x400132c580) (0x40025c1360) Create stream
I0828 14:20:38.019662      11 log.go:172] (0x400132c580) (0x40025c1360) Stream added, broadcasting: 1
I0828 14:20:38.025694      11 log.go:172] (0x400132c580) Reply frame received for 1
I0828 14:20:38.025868      11 log.go:172] (0x400132c580) (0x4000107360) Create stream
I0828 14:20:38.025936      11 log.go:172] (0x400132c580) (0x4000107360) Stream added, broadcasting: 3
I0828 14:20:38.027235      11 log.go:172] (0x400132c580) Reply frame received for 3
I0828 14:20:38.027354      11 log.go:172] (0x400132c580) (0x4002056000) Create stream
I0828 14:20:38.027403      11 log.go:172] (0x400132c580) (0x4002056000) Stream added, broadcasting: 5
I0828 14:20:38.028611      11 log.go:172] (0x400132c580) Reply frame received for 5
I0828 14:20:38.096663      11 log.go:172] (0x400132c580) Data frame received for 3
I0828 14:20:38.096830      11 log.go:172] (0x4000107360) (3) Data frame handling
I0828 14:20:38.096906      11 log.go:172] (0x4000107360) (3) Data frame sent
I0828 14:20:38.096972      11 log.go:172] (0x400132c580) Data frame received for 3
I0828 14:20:38.097045      11 log.go:172] (0x4000107360) (3) Data frame handling
I0828 14:20:38.097218      11 log.go:172] (0x400132c580) Data frame received for 5
I0828 14:20:38.097410      11 log.go:172] (0x4002056000) (5) Data frame handling
I0828 14:20:38.098019      11 log.go:172] (0x400132c580) Data frame received for 1
I0828 14:20:38.098094      11 log.go:172] (0x40025c1360) (1) Data frame handling
I0828 14:20:38.098190      11 log.go:172] (0x40025c1360) (1) Data frame sent
I0828 14:20:38.098275      11 log.go:172] (0x400132c580) (0x40025c1360) Stream removed, broadcasting: 1
I0828 14:20:38.098527      11 log.go:172] (0x400132c580) Go away received
I0828 14:20:38.098663      11 log.go:172] (0x400132c580) (0x40025c1360) Stream removed, broadcasting: 1
I0828 14:20:38.098735      11 log.go:172] (0x400132c580) (0x4000107360) Stream removed, broadcasting: 3
I0828 14:20:38.098800      11 log.go:172] (0x400132c580) (0x4002056000) Stream removed, broadcasting: 5
Aug 28 14:20:38.098: INFO: Exec stderr: ""
Aug 28 14:20:38.099: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2166 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:20:38.099: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:20:38.147735      11 log.go:172] (0x4003148580) (0x4002056780) Create stream
I0828 14:20:38.147832      11 log.go:172] (0x4003148580) (0x4002056780) Stream added, broadcasting: 1
I0828 14:20:38.150167      11 log.go:172] (0x4003148580) Reply frame received for 1
I0828 14:20:38.150319      11 log.go:172] (0x4003148580) (0x40025c1540) Create stream
I0828 14:20:38.150386      11 log.go:172] (0x4003148580) (0x40025c1540) Stream added, broadcasting: 3
I0828 14:20:38.151426      11 log.go:172] (0x4003148580) Reply frame received for 3
I0828 14:20:38.151567      11 log.go:172] (0x4003148580) (0x4002056820) Create stream
I0828 14:20:38.151684      11 log.go:172] (0x4003148580) (0x4002056820) Stream added, broadcasting: 5
I0828 14:20:38.152953      11 log.go:172] (0x4003148580) Reply frame received for 5
I0828 14:20:38.211634      11 log.go:172] (0x4003148580) Data frame received for 3
I0828 14:20:38.211745      11 log.go:172] (0x40025c1540) (3) Data frame handling
I0828 14:20:38.211813      11 log.go:172] (0x40025c1540) (3) Data frame sent
I0828 14:20:38.211873      11 log.go:172] (0x4003148580) Data frame received for 3
I0828 14:20:38.211925      11 log.go:172] (0x40025c1540) (3) Data frame handling
I0828 14:20:38.211982      11 log.go:172] (0x4003148580) Data frame received for 5
I0828 14:20:38.212090      11 log.go:172] (0x4002056820) (5) Data frame handling
I0828 14:20:38.213009      11 log.go:172] (0x4003148580) Data frame received for 1
I0828 14:20:38.213073      11 log.go:172] (0x4002056780) (1) Data frame handling
I0828 14:20:38.213141      11 log.go:172] (0x4002056780) (1) Data frame sent
I0828 14:20:38.213199      11 log.go:172] (0x4003148580) (0x4002056780) Stream removed, broadcasting: 1
I0828 14:20:38.213260      11 log.go:172] (0x4003148580) Go away received
I0828 14:20:38.213477      11 log.go:172] (0x4003148580) (0x4002056780) Stream removed, broadcasting: 1
I0828 14:20:38.213563      11 log.go:172] (0x4003148580) (0x40025c1540) Stream removed, broadcasting: 3
I0828 14:20:38.213648      11 log.go:172] (0x4003148580) (0x4002056820) Stream removed, broadcasting: 5
Aug 28 14:20:38.213: INFO: Exec stderr: ""
Aug 28 14:20:38.213: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2166 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:20:38.213: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:20:38.273441      11 log.go:172] (0x4003148bb0) (0x4002056aa0) Create stream
I0828 14:20:38.273590      11 log.go:172] (0x4003148bb0) (0x4002056aa0) Stream added, broadcasting: 1
I0828 14:20:38.276764      11 log.go:172] (0x4003148bb0) Reply frame received for 1
I0828 14:20:38.276942      11 log.go:172] (0x4003148bb0) (0x4000910be0) Create stream
I0828 14:20:38.277036      11 log.go:172] (0x4003148bb0) (0x4000910be0) Stream added, broadcasting: 3
I0828 14:20:38.278290      11 log.go:172] (0x4003148bb0) Reply frame received for 3
I0828 14:20:38.278449      11 log.go:172] (0x4003148bb0) (0x40025c15e0) Create stream
I0828 14:20:38.278507      11 log.go:172] (0x4003148bb0) (0x40025c15e0) Stream added, broadcasting: 5
I0828 14:20:38.279494      11 log.go:172] (0x4003148bb0) Reply frame received for 5
I0828 14:20:38.358822      11 log.go:172] (0x4003148bb0) Data frame received for 3
I0828 14:20:38.358941      11 log.go:172] (0x4000910be0) (3) Data frame handling
I0828 14:20:38.359066      11 log.go:172] (0x4003148bb0) Data frame received for 5
I0828 14:20:38.359185      11 log.go:172] (0x40025c15e0) (5) Data frame handling
I0828 14:20:38.359336      11 log.go:172] (0x4000910be0) (3) Data frame sent
I0828 14:20:38.359461      11 log.go:172] (0x4003148bb0) Data frame received for 3
I0828 14:20:38.359578      11 log.go:172] (0x4000910be0) (3) Data frame handling
I0828 14:20:38.359747      11 log.go:172] (0x4003148bb0) Data frame received for 1
I0828 14:20:38.359848      11 log.go:172] (0x4002056aa0) (1) Data frame handling
I0828 14:20:38.359926      11 log.go:172] (0x4002056aa0) (1) Data frame sent
I0828 14:20:38.360067      11 log.go:172] (0x4003148bb0) (0x4002056aa0) Stream removed, broadcasting: 1
I0828 14:20:38.360198      11 log.go:172] (0x4003148bb0) Go away received
I0828 14:20:38.360433      11 log.go:172] (0x4003148bb0) (0x4002056aa0) Stream removed, broadcasting: 1
I0828 14:20:38.360579      11 log.go:172] (0x4003148bb0) (0x4000910be0) Stream removed, broadcasting: 3
I0828 14:20:38.360815      11 log.go:172] (0x4003148bb0) (0x40025c15e0) Stream removed, broadcasting: 5
Aug 28 14:20:38.360: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 28 14:20:38.361: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2166 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:20:38.361: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:20:38.410810      11 log.go:172] (0x40033f0630) (0x40009972c0) Create stream
I0828 14:20:38.410980      11 log.go:172] (0x40033f0630) (0x40009972c0) Stream added, broadcasting: 1
I0828 14:20:38.413727      11 log.go:172] (0x40033f0630) Reply frame received for 1
I0828 14:20:38.413848      11 log.go:172] (0x40033f0630) (0x4000997400) Create stream
I0828 14:20:38.413912      11 log.go:172] (0x40033f0630) (0x4000997400) Stream added, broadcasting: 3
I0828 14:20:38.415055      11 log.go:172] (0x40033f0630) Reply frame received for 3
I0828 14:20:38.415153      11 log.go:172] (0x40033f0630) (0x4002fc4140) Create stream
I0828 14:20:38.415207      11 log.go:172] (0x40033f0630) (0x4002fc4140) Stream added, broadcasting: 5
I0828 14:20:38.416168      11 log.go:172] (0x40033f0630) Reply frame received for 5
I0828 14:20:38.472867      11 log.go:172] (0x40033f0630) Data frame received for 3
I0828 14:20:38.472944      11 log.go:172] (0x4000997400) (3) Data frame handling
I0828 14:20:38.472998      11 log.go:172] (0x4000997400) (3) Data frame sent
I0828 14:20:38.473050      11 log.go:172] (0x40033f0630) Data frame received for 3
I0828 14:20:38.473112      11 log.go:172] (0x4000997400) (3) Data frame handling
I0828 14:20:38.473284      11 log.go:172] (0x40033f0630) Data frame received for 5
I0828 14:20:38.473433      11 log.go:172] (0x4002fc4140) (5) Data frame handling
I0828 14:20:38.473879      11 log.go:172] (0x40033f0630) Data frame received for 1
I0828 14:20:38.473936      11 log.go:172] (0x40009972c0) (1) Data frame handling
I0828 14:20:38.473998      11 log.go:172] (0x40009972c0) (1) Data frame sent
I0828 14:20:38.474065      11 log.go:172] (0x40033f0630) (0x40009972c0) Stream removed, broadcasting: 1
I0828 14:20:38.474148      11 log.go:172] (0x40033f0630) Go away received
I0828 14:20:38.474435      11 log.go:172] (0x40033f0630) (0x40009972c0) Stream removed, broadcasting: 1
I0828 14:20:38.474523      11 log.go:172] (0x40033f0630) (0x4000997400) Stream removed, broadcasting: 3
I0828 14:20:38.474607      11 log.go:172] (0x40033f0630) (0x4002fc4140) Stream removed, broadcasting: 5
Aug 28 14:20:38.474: INFO: Exec stderr: ""
Aug 28 14:20:38.474: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2166 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:20:38.474: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:20:38.539969      11 log.go:172] (0x40031d4bb0) (0x40009114a0) Create stream
I0828 14:20:38.540056      11 log.go:172] (0x40031d4bb0) (0x40009114a0) Stream added, broadcasting: 1
I0828 14:20:38.542736      11 log.go:172] (0x40031d4bb0) Reply frame received for 1
I0828 14:20:38.542912      11 log.go:172] (0x40031d4bb0) (0x4000911540) Create stream
I0828 14:20:38.542980      11 log.go:172] (0x40031d4bb0) (0x4000911540) Stream added, broadcasting: 3
I0828 14:20:38.543954      11 log.go:172] (0x40031d4bb0) Reply frame received for 3
I0828 14:20:38.544034      11 log.go:172] (0x40031d4bb0) (0x4002fc4640) Create stream
I0828 14:20:38.544074      11 log.go:172] (0x40031d4bb0) (0x4002fc4640) Stream added, broadcasting: 5
I0828 14:20:38.544956      11 log.go:172] (0x40031d4bb0) Reply frame received for 5
I0828 14:20:38.603630      11 log.go:172] (0x40031d4bb0) Data frame received for 5
I0828 14:20:38.603736      11 log.go:172] (0x4002fc4640) (5) Data frame handling
I0828 14:20:38.603859      11 log.go:172] (0x40031d4bb0) Data frame received for 3
I0828 14:20:38.603969      11 log.go:172] (0x4000911540) (3) Data frame handling
I0828 14:20:38.604125      11 log.go:172] (0x4000911540) (3) Data frame sent
I0828 14:20:38.604227      11 log.go:172] (0x40031d4bb0) Data frame received for 3
I0828 14:20:38.604299      11 log.go:172] (0x4000911540) (3) Data frame handling
I0828 14:20:38.604426      11 log.go:172] (0x40031d4bb0) Data frame received for 1
I0828 14:20:38.604475      11 log.go:172] (0x40009114a0) (1) Data frame handling
I0828 14:20:38.604524      11 log.go:172] (0x40009114a0) (1) Data frame sent
I0828 14:20:38.604582      11 log.go:172] (0x40031d4bb0) (0x40009114a0) Stream removed, broadcasting: 1
I0828 14:20:38.604645      11 log.go:172] (0x40031d4bb0) Go away received
I0828 14:20:38.605206      11 log.go:172] (0x40031d4bb0) (0x40009114a0) Stream removed, broadcasting: 1
I0828 14:20:38.605313      11 log.go:172] (0x40031d4bb0) (0x4000911540) Stream removed, broadcasting: 3
I0828 14:20:38.605403      11 log.go:172] (0x40031d4bb0) (0x4002fc4640) Stream removed, broadcasting: 5
Aug 28 14:20:38.605: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 28 14:20:38.605: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2166 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:20:38.605: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:20:38.653457      11 log.go:172] (0x400132cbb0) (0x40025c1900) Create stream
I0828 14:20:38.653572      11 log.go:172] (0x400132cbb0) (0x40025c1900) Stream added, broadcasting: 1
I0828 14:20:38.656044      11 log.go:172] (0x400132cbb0) Reply frame received for 1
I0828 14:20:38.656188      11 log.go:172] (0x400132cbb0) (0x40009115e0) Create stream
I0828 14:20:38.656252      11 log.go:172] (0x400132cbb0) (0x40009115e0) Stream added, broadcasting: 3
I0828 14:20:38.657283      11 log.go:172] (0x400132cbb0) Reply frame received for 3
I0828 14:20:38.657369      11 log.go:172] (0x400132cbb0) (0x40025c19a0) Create stream
I0828 14:20:38.657414      11 log.go:172] (0x400132cbb0) (0x40025c19a0) Stream added, broadcasting: 5
I0828 14:20:38.658288      11 log.go:172] (0x400132cbb0) Reply frame received for 5
I0828 14:20:38.721774      11 log.go:172] (0x400132cbb0) Data frame received for 3
I0828 14:20:38.721892      11 log.go:172] (0x40009115e0) (3) Data frame handling
I0828 14:20:38.721970      11 log.go:172] (0x400132cbb0) Data frame received for 5
I0828 14:20:38.722055      11 log.go:172] (0x40025c19a0) (5) Data frame handling
I0828 14:20:38.722109      11 log.go:172] (0x40009115e0) (3) Data frame sent
I0828 14:20:38.722171      11 log.go:172] (0x400132cbb0) Data frame received for 3
I0828 14:20:38.722216      11 log.go:172] (0x40009115e0) (3) Data frame handling
I0828 14:20:38.722815      11 log.go:172] (0x400132cbb0) Data frame received for 1
I0828 14:20:38.722881      11 log.go:172] (0x40025c1900) (1) Data frame handling
I0828 14:20:38.722961      11 log.go:172] (0x40025c1900) (1) Data frame sent
I0828 14:20:38.723037      11 log.go:172] (0x400132cbb0) (0x40025c1900) Stream removed, broadcasting: 1
I0828 14:20:38.723121      11 log.go:172] (0x400132cbb0) Go away received
I0828 14:20:38.723520      11 log.go:172] (0x400132cbb0) (0x40025c1900) Stream removed, broadcasting: 1
I0828 14:20:38.723632      11 log.go:172] (0x400132cbb0) (0x40009115e0) Stream removed, broadcasting: 3
I0828 14:20:38.723703      11 log.go:172] (0x400132cbb0) (0x40025c19a0) Stream removed, broadcasting: 5
Aug 28 14:20:38.723: INFO: Exec stderr: ""
Aug 28 14:20:38.723: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2166 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:20:38.723: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:20:38.778847      11 log.go:172] (0x40033f09a0) (0x4000997d60) Create stream
I0828 14:20:38.779010      11 log.go:172] (0x40033f09a0) (0x4000997d60) Stream added, broadcasting: 1
I0828 14:20:38.783817      11 log.go:172] (0x40033f09a0) Reply frame received for 1
I0828 14:20:38.783934      11 log.go:172] (0x40033f09a0) (0x40009117c0) Create stream
I0828 14:20:38.783981      11 log.go:172] (0x40033f09a0) (0x40009117c0) Stream added, broadcasting: 3
I0828 14:20:38.785018      11 log.go:172] (0x40033f09a0) Reply frame received for 3
I0828 14:20:38.785089      11 log.go:172] (0x40033f09a0) (0x40025c1ae0) Create stream
I0828 14:20:38.785139      11 log.go:172] (0x40033f09a0) (0x40025c1ae0) Stream added, broadcasting: 5
I0828 14:20:38.786312      11 log.go:172] (0x40033f09a0) Reply frame received for 5
I0828 14:20:38.858910      11 log.go:172] (0x40033f09a0) Data frame received for 5
I0828 14:20:38.859029      11 log.go:172] (0x40025c1ae0) (5) Data frame handling
I0828 14:20:38.859119      11 log.go:172] (0x40033f09a0) Data frame received for 3
I0828 14:20:38.859202      11 log.go:172] (0x40009117c0) (3) Data frame handling
I0828 14:20:38.859279      11 log.go:172] (0x40009117c0) (3) Data frame sent
I0828 14:20:38.859363      11 log.go:172] (0x40033f09a0) Data frame received for 3
I0828 14:20:38.859441      11 log.go:172] (0x40009117c0) (3) Data frame handling
I0828 14:20:38.860141      11 log.go:172] (0x40033f09a0) Data frame received for 1
I0828 14:20:38.860193      11 log.go:172] (0x4000997d60) (1) Data frame handling
I0828 14:20:38.860262      11 log.go:172] (0x4000997d60) (1) Data frame sent
I0828 14:20:38.860324      11 log.go:172] (0x40033f09a0) (0x4000997d60) Stream removed, broadcasting: 1
I0828 14:20:38.860394      11 log.go:172] (0x40033f09a0) Go away received
I0828 14:20:38.860629      11 log.go:172] (0x40033f09a0) (0x4000997d60) Stream removed, broadcasting: 1
I0828 14:20:38.860698      11 log.go:172] (0x40033f09a0) (0x40009117c0) Stream removed, broadcasting: 3
I0828 14:20:38.860847      11 log.go:172] (0x40033f09a0) (0x40025c1ae0) Stream removed, broadcasting: 5
Aug 28 14:20:38.860: INFO: Exec stderr: ""
Aug 28 14:20:38.861: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2166 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:20:38.861: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:20:38.915082      11 log.go:172] (0x40031d5550) (0x4000efe0a0) Create stream
I0828 14:20:38.915201      11 log.go:172] (0x40031d5550) (0x4000efe0a0) Stream added, broadcasting: 1
I0828 14:20:38.917960      11 log.go:172] (0x40031d5550) Reply frame received for 1
I0828 14:20:38.918123      11 log.go:172] (0x40031d5550) (0x4002fc4780) Create stream
I0828 14:20:38.918196      11 log.go:172] (0x40031d5550) (0x4002fc4780) Stream added, broadcasting: 3
I0828 14:20:38.919651      11 log.go:172] (0x40031d5550) Reply frame received for 3
I0828 14:20:38.919837      11 log.go:172] (0x40031d5550) (0x4000efe280) Create stream
I0828 14:20:38.919941      11 log.go:172] (0x40031d5550) (0x4000efe280) Stream added, broadcasting: 5
I0828 14:20:38.921399      11 log.go:172] (0x40031d5550) Reply frame received for 5
I0828 14:20:38.990701      11 log.go:172] (0x40031d5550) Data frame received for 3
I0828 14:20:38.990834      11 log.go:172] (0x4002fc4780) (3) Data frame handling
I0828 14:20:38.990981      11 log.go:172] (0x40031d5550) Data frame received for 5
I0828 14:20:38.991078      11 log.go:172] (0x4000efe280) (5) Data frame handling
I0828 14:20:38.991180      11 log.go:172] (0x4002fc4780) (3) Data frame sent
I0828 14:20:38.991274      11 log.go:172] (0x40031d5550) Data frame received for 3
I0828 14:20:38.991359      11 log.go:172] (0x4002fc4780) (3) Data frame handling
I0828 14:20:38.991787      11 log.go:172] (0x40031d5550) Data frame received for 1
I0828 14:20:38.991862      11 log.go:172] (0x4000efe0a0) (1) Data frame handling
I0828 14:20:38.991935      11 log.go:172] (0x4000efe0a0) (1) Data frame sent
I0828 14:20:38.992021      11 log.go:172] (0x40031d5550) (0x4000efe0a0) Stream removed, broadcasting: 1
I0828 14:20:38.992115      11 log.go:172] (0x40031d5550) Go away received
I0828 14:20:38.992338      11 log.go:172] (0x40031d5550) (0x4000efe0a0) Stream removed, broadcasting: 1
I0828 14:20:38.992427      11 log.go:172] (0x40031d5550) (0x4002fc4780) Stream removed, broadcasting: 3
I0828 14:20:38.992510      11 log.go:172] (0x40031d5550) (0x4000efe280) Stream removed, broadcasting: 5
Aug 28 14:20:38.992: INFO: Exec stderr: ""
Aug 28 14:20:38.992: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2166 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:20:38.992: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:20:39.037277      11 log.go:172] (0x40033f0fd0) (0x4000e7a140) Create stream
I0828 14:20:39.037413      11 log.go:172] (0x40033f0fd0) (0x4000e7a140) Stream added, broadcasting: 1
I0828 14:20:39.040532      11 log.go:172] (0x40033f0fd0) Reply frame received for 1
I0828 14:20:39.040709      11 log.go:172] (0x40033f0fd0) (0x4002056c80) Create stream
I0828 14:20:39.040946      11 log.go:172] (0x40033f0fd0) (0x4002056c80) Stream added, broadcasting: 3
I0828 14:20:39.042397      11 log.go:172] (0x40033f0fd0) Reply frame received for 3
I0828 14:20:39.042518      11 log.go:172] (0x40033f0fd0) (0x40025c1cc0) Create stream
I0828 14:20:39.042605      11 log.go:172] (0x40033f0fd0) (0x40025c1cc0) Stream added, broadcasting: 5
I0828 14:20:39.044164      11 log.go:172] (0x40033f0fd0) Reply frame received for 5
I0828 14:20:39.110459      11 log.go:172] (0x40033f0fd0) Data frame received for 3
I0828 14:20:39.110581      11 log.go:172] (0x4002056c80) (3) Data frame handling
I0828 14:20:39.110680      11 log.go:172] (0x4002056c80) (3) Data frame sent
I0828 14:20:39.110751      11 log.go:172] (0x40033f0fd0) Data frame received for 3
I0828 14:20:39.110822      11 log.go:172] (0x4002056c80) (3) Data frame handling
I0828 14:20:39.110933      11 log.go:172] (0x40033f0fd0) Data frame received for 5
I0828 14:20:39.111046      11 log.go:172] (0x40025c1cc0) (5) Data frame handling
I0828 14:20:39.111366      11 log.go:172] (0x40033f0fd0) Data frame received for 1
I0828 14:20:39.111437      11 log.go:172] (0x4000e7a140) (1) Data frame handling
I0828 14:20:39.111510      11 log.go:172] (0x4000e7a140) (1) Data frame sent
I0828 14:20:39.111828      11 log.go:172] (0x40033f0fd0) (0x4000e7a140) Stream removed, broadcasting: 1
I0828 14:20:39.112146      11 log.go:172] (0x40033f0fd0) (0x4000e7a140) Stream removed, broadcasting: 1
I0828 14:20:39.112240      11 log.go:172] (0x40033f0fd0) (0x4002056c80) Stream removed, broadcasting: 3
I0828 14:20:39.112312      11 log.go:172] (0x40033f0fd0) (0x40025c1cc0) Stream removed, broadcasting: 5
Aug 28 14:20:39.112: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:20:39.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0828 14:20:39.112919      11 log.go:172] (0x40033f0fd0) Go away received
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2166" for this suite.

• [SLOW TEST:29.374 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2889,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:20:39.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:20:39.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9c061e7-29a5-4b2a-b962-faad8b256bed" in namespace "projected-9215" to be "Succeeded or Failed"
Aug 28 14:20:40.001: INFO: Pod "downwardapi-volume-b9c061e7-29a5-4b2a-b962-faad8b256bed": Phase="Pending", Reason="", readiness=false. Elapsed: 53.180469ms
Aug 28 14:20:42.673: INFO: Pod "downwardapi-volume-b9c061e7-29a5-4b2a-b962-faad8b256bed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.725439438s
Aug 28 14:20:45.618: INFO: Pod "downwardapi-volume-b9c061e7-29a5-4b2a-b962-faad8b256bed": Phase="Pending", Reason="", readiness=false. Elapsed: 5.670681723s
Aug 28 14:20:47.623: INFO: Pod "downwardapi-volume-b9c061e7-29a5-4b2a-b962-faad8b256bed": Phase="Running", Reason="", readiness=true. Elapsed: 7.675542282s
Aug 28 14:20:49.628: INFO: Pod "downwardapi-volume-b9c061e7-29a5-4b2a-b962-faad8b256bed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.680824751s
STEP: Saw pod success
Aug 28 14:20:49.628: INFO: Pod "downwardapi-volume-b9c061e7-29a5-4b2a-b962-faad8b256bed" satisfied condition "Succeeded or Failed"
Aug 28 14:20:49.631: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b9c061e7-29a5-4b2a-b962-faad8b256bed container client-container: 
STEP: delete the pod
Aug 28 14:20:50.306: INFO: Waiting for pod downwardapi-volume-b9c061e7-29a5-4b2a-b962-faad8b256bed to disappear
Aug 28 14:20:50.308: INFO: Pod downwardapi-volume-b9c061e7-29a5-4b2a-b962-faad8b256bed no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:20:50.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9215" for this suite.

• [SLOW TEST:11.025 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2917,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:20:50.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 28 14:20:51.519: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Aug 28 14:20:55.286: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 28 14:21:00.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221254, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:21:02.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221254, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:21:04.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221254, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:21:06.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221254, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:21:09.042: INFO: Waited 841.394429ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:21:10.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2493" for this suite.

• [SLOW TEST:20.634 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":175,"skipped":2951,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:21:11.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-7fafbc03-4ad6-49cb-97fc-20fb922f6859
STEP: Creating a pod to test consume configMaps
Aug 28 14:21:11.848: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897" in namespace "projected-1045" to be "Succeeded or Failed"
Aug 28 14:21:12.134: INFO: Pod "pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897": Phase="Pending", Reason="", readiness=false. Elapsed: 284.996889ms
Aug 28 14:21:14.139: INFO: Pod "pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290834046s
Aug 28 14:21:16.145: INFO: Pod "pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296884456s
Aug 28 14:21:18.262: INFO: Pod "pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413290278s
Aug 28 14:21:20.344: INFO: Pod "pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897": Phase="Pending", Reason="", readiness=false. Elapsed: 8.49498607s
Aug 28 14:21:22.351: INFO: Pod "pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.502550991s
STEP: Saw pod success
Aug 28 14:21:22.351: INFO: Pod "pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897" satisfied condition "Succeeded or Failed"
Aug 28 14:21:22.355: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 28 14:21:22.950: INFO: Waiting for pod pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897 to disappear
Aug 28 14:21:23.055: INFO: Pod pod-projected-configmaps-f7be9e53-4001-4f63-ac36-d9cdd3bdc897 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:21:23.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1045" for this suite.

• [SLOW TEST:11.856 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2955,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:21:23.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:21:23.299: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-148a7b26-a1af-433b-bb8f-0708e38045f9" in namespace "security-context-test-858" to be "Succeeded or Failed"
Aug 28 14:21:23.357: INFO: Pod "busybox-privileged-false-148a7b26-a1af-433b-bb8f-0708e38045f9": Phase="Pending", Reason="", readiness=false. Elapsed: 57.229329ms
Aug 28 14:21:25.405: INFO: Pod "busybox-privileged-false-148a7b26-a1af-433b-bb8f-0708e38045f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10542007s
Aug 28 14:21:27.411: INFO: Pod "busybox-privileged-false-148a7b26-a1af-433b-bb8f-0708e38045f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111376607s
Aug 28 14:21:29.422: INFO: Pod "busybox-privileged-false-148a7b26-a1af-433b-bb8f-0708e38045f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12262872s
Aug 28 14:21:31.459: INFO: Pod "busybox-privileged-false-148a7b26-a1af-433b-bb8f-0708e38045f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.159264437s
Aug 28 14:21:31.459: INFO: Pod "busybox-privileged-false-148a7b26-a1af-433b-bb8f-0708e38045f9" satisfied condition "Succeeded or Failed"
Aug 28 14:21:32.018: INFO: Got logs for pod "busybox-privileged-false-148a7b26-a1af-433b-bb8f-0708e38045f9": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:21:32.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-858" for this suite.

• [SLOW TEST:8.951 seconds]
[k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with privileged
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":2969,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:21:32.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-l75bb in namespace proxy-7682
I0828 14:21:33.436461      11 runners.go:190] Created replication controller with name: proxy-service-l75bb, namespace: proxy-7682, replica count: 1
I0828 14:21:34.487794      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:21:35.488417      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:21:36.489207      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:21:37.489956      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:21:38.490647      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:21:39.491157      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:21:40.491932      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0828 14:21:41.492684      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0828 14:21:42.493413      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0828 14:21:43.494220      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0828 14:21:44.494984      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0828 14:21:45.495627      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0828 14:21:46.496196      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0828 14:21:47.497051      11 runners.go:190] proxy-service-l75bb Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
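The runners.go lines above show the test pod progressing through pending → runningButNotReady → running, polled roughly once per second until the desired count is reached. As an illustration only (this is not the actual runners.go code), a minimal polling sketch of that wait loop; `wait_until_running` and `get_phase` are hypothetical names:

```python
import time

# Illustrative sketch, assuming a get_phase() callable that reports the pod's
# current state the way the log lines above count states (pending,
# runningButNotReady, running). Not the real e2e framework API.

def wait_until_running(get_phase, timeout_s=60, interval_s=1.0, sleep=time.sleep):
    """Poll get_phase() until it returns 'running' or the timeout elapses."""
    waited = 0.0
    while waited < timeout_s:
        if get_phase() == "running":
            return True
        sleep(interval_s)
        waited += interval_s
    return False

# Usage: simulate the progression seen in the log above.
phases = iter(["pending", "pending", "runningButNotReady", "running"])
assert wait_until_running(lambda: next(phases), sleep=lambda s: None)
```

The injectable `sleep` parameter is only there so the sketch can be exercised without real delays; the actual framework logs each poll tick, which is what produces the repeated status lines above.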
Aug 28 14:21:47.794: INFO: setup took 15.316929679s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
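The 16 cases exercised below all hit the apiserver's proxy subresource, whose paths follow the pattern `/api/v1/namespaces/{ns}/{pods|services}/{scheme}:{name}:{port}/proxy/` (scheme prefix optional, port either a number or a named service port). As a sketch only, the 16 endpoint paths visible in the attempt lines below can be enumerated from the pod name, service name, and ports that appear in the log; the `proxy_path` helper is hypothetical:

```python
# Illustrative sketch: reconstruct the 16 proxy URL variants seen in the log.
# Namespace, pod, service, and port values are taken from the log lines;
# the proxy_path() helper itself is an assumption, not framework code.

NAMESPACE = "proxy-7682"
POD = "proxy-service-l75bb-r9xhn"
SERVICE = "proxy-service-l75bb"

def proxy_path(kind: str, target: str) -> str:
    """Build an apiserver proxy subresource path for a pod or service."""
    return f"/api/v1/namespaces/{NAMESPACE}/{kind}/{target}/proxy/"

# Pod endpoints: plain, port-qualified, and scheme-prefixed (http/https) forms.
pod_targets = [
    POD,                                  # default port
    f"{POD}:1080", f"{POD}:160", f"{POD}:162",
    f"http:{POD}:1080", f"http:{POD}:160", f"http:{POD}:162",
    f"https:{POD}:443", f"https:{POD}:460", f"https:{POD}:462",
]

# Service endpoints: named ports, plain and scheme-prefixed.
svc_targets = [
    f"{SERVICE}:portname1", f"{SERVICE}:portname2",
    f"http:{SERVICE}:portname1", f"http:{SERVICE}:portname2",
    f"https:{SERVICE}:tlsportname1", f"https:{SERVICE}:tlsportname2",
]

cases = [proxy_path("pods", t) for t in pod_targets] + \
        [proxy_path("services", t) for t in svc_targets]

assert len(cases) == 16  # matches "running 16 cases" above
```

Each of the 20 iterations below issues one GET per case and logs the response snippet and latency, giving the 320 total attempts reported in the STEP line.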
Aug 28 14:21:47.806: INFO: (0) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 10.841326ms)
Aug 28 14:21:47.807: INFO: (0) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 11.338932ms)
Aug 28 14:21:47.807: INFO: (0) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 12.065398ms)
Aug 28 14:21:47.807: INFO: (0) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 12.33686ms)
Aug 28 14:21:47.808: INFO: (0) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 12.525422ms)
Aug 28 14:21:47.808: INFO: (0) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 12.548506ms)
Aug 28 14:21:47.808: INFO: (0) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 12.16132ms)
Aug 28 14:21:47.808: INFO: (0) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 12.409974ms)
Aug 28 14:21:47.808: INFO: (0) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 13.322568ms)
Aug 28 14:21:47.808: INFO: (0) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 12.656086ms)
Aug 28 14:21:47.812: INFO: (0) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 17.169822ms)
Aug 28 14:21:47.814: INFO: (0) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 18.726716ms)
Aug 28 14:21:47.814: INFO: (0) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 19.008858ms)
Aug 28 14:21:47.814: INFO: (0) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 19.276519ms)
Aug 28 14:21:47.814: INFO: (0) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 19.351025ms)
Aug 28 14:21:47.814: INFO: (0) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: ... (200; 4.463986ms)
Aug 28 14:21:47.820: INFO: (1) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 4.912659ms)
Aug 28 14:21:47.822: INFO: (1) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 7.003087ms)
Aug 28 14:21:47.822: INFO: (1) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 7.143723ms)
Aug 28 14:21:47.822: INFO: (1) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 7.530491ms)
Aug 28 14:21:47.824: INFO: (1) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 8.946708ms)
Aug 28 14:21:47.824: INFO: (1) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 9.030626ms)
Aug 28 14:21:47.825: INFO: (1) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 10.005665ms)
Aug 28 14:21:47.825: INFO: (1) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 10.434845ms)
Aug 28 14:21:47.826: INFO: (1) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 11.060532ms)
Aug 28 14:21:47.826: INFO: (1) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 10.567036ms)
Aug 28 14:21:47.826: INFO: (1) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 10.78047ms)
Aug 28 14:21:47.827: INFO: (1) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 11.29794ms)
Aug 28 14:21:47.827: INFO: (1) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 11.69084ms)
Aug 28 14:21:47.827: INFO: (1) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test (200; 12.872499ms)
Aug 28 14:21:47.834: INFO: (2) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 5.365949ms)
Aug 28 14:21:47.834: INFO: (2) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 5.658334ms)
Aug 28 14:21:47.834: INFO: (2) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 5.797295ms)
Aug 28 14:21:47.834: INFO: (2) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 6.264646ms)
Aug 28 14:21:47.835: INFO: (2) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 6.682353ms)
Aug 28 14:21:47.835: INFO: (2) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 6.888913ms)
Aug 28 14:21:47.835: INFO: (2) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 7.086933ms)
Aug 28 14:21:47.835: INFO: (2) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.732701ms)
Aug 28 14:21:47.835: INFO: (2) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 7.24627ms)
Aug 28 14:21:47.836: INFO: (2) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: ... (200; 4.619965ms)
Aug 28 14:21:47.844: INFO: (3) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 5.949376ms)
Aug 28 14:21:47.844: INFO: (3) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test<... (200; 6.496596ms)
Aug 28 14:21:47.845: INFO: (3) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 6.851741ms)
Aug 28 14:21:47.845: INFO: (3) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 6.887722ms)
Aug 28 14:21:47.845: INFO: (3) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 7.112761ms)
Aug 28 14:21:47.845: INFO: (3) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 7.230444ms)
Aug 28 14:21:47.845: INFO: (3) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 7.302684ms)
Aug 28 14:21:47.845: INFO: (3) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 7.378639ms)
Aug 28 14:21:47.852: INFO: (4) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.021499ms)
Aug 28 14:21:47.852: INFO: (4) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 6.284702ms)
Aug 28 14:21:47.852: INFO: (4) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 6.751771ms)
Aug 28 14:21:47.853: INFO: (4) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 6.729548ms)
Aug 28 14:21:47.853: INFO: (4) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 7.049452ms)
Aug 28 14:21:47.853: INFO: (4) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 7.240096ms)
Aug 28 14:21:47.853: INFO: (4) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: ... (200; 7.461323ms)
Aug 28 14:21:47.853: INFO: (4) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 7.875038ms)
Aug 28 14:21:47.853: INFO: (4) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 7.878342ms)
Aug 28 14:21:47.853: INFO: (4) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 7.726687ms)
Aug 28 14:21:47.853: INFO: (4) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 7.651246ms)
Aug 28 14:21:47.854: INFO: (4) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 7.97804ms)
Aug 28 14:21:47.854: INFO: (4) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 7.979295ms)
Aug 28 14:21:47.854: INFO: (4) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 8.275724ms)
Aug 28 14:21:47.854: INFO: (4) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 8.331622ms)
Aug 28 14:21:47.857: INFO: (5) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 3.16266ms)
Aug 28 14:21:47.858: INFO: (5) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 3.760403ms)
Aug 28 14:21:47.858: INFO: (5) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 4.154442ms)
Aug 28 14:21:47.859: INFO: (5) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 4.808108ms)
Aug 28 14:21:47.859: INFO: (5) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 5.021248ms)
Aug 28 14:21:47.860: INFO: (5) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 5.308907ms)
Aug 28 14:21:47.860: INFO: (5) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 5.401535ms)
Aug 28 14:21:47.860: INFO: (5) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 6.016416ms)
Aug 28 14:21:47.860: INFO: (5) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 5.844437ms)
Aug 28 14:21:47.860: INFO: (5) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 5.925859ms)
Aug 28 14:21:47.860: INFO: (5) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 6.081148ms)
Aug 28 14:21:47.861: INFO: (5) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: ... (200; 6.704671ms)
Aug 28 14:21:47.861: INFO: (5) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 6.734233ms)
Aug 28 14:21:47.861: INFO: (5) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 6.783256ms)
Aug 28 14:21:47.861: INFO: (5) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 6.342337ms)
Aug 28 14:21:47.865: INFO: (6) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 4.238418ms)
Aug 28 14:21:47.867: INFO: (6) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 5.418489ms)
Aug 28 14:21:47.867: INFO: (6) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test (200; 6.757226ms)
Aug 28 14:21:47.868: INFO: (6) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.830442ms)
Aug 28 14:21:47.868: INFO: (6) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 7.186273ms)
Aug 28 14:21:47.869: INFO: (6) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 7.530418ms)
Aug 28 14:21:47.869: INFO: (6) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 7.850147ms)
Aug 28 14:21:47.869: INFO: (6) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 7.885384ms)
Aug 28 14:21:47.869: INFO: (6) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 8.058607ms)
Aug 28 14:21:47.869: INFO: (6) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 8.180335ms)
Aug 28 14:21:47.869: INFO: (6) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 8.149059ms)
Aug 28 14:21:47.870: INFO: (6) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 8.70707ms)
Aug 28 14:21:47.870: INFO: (6) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 8.811077ms)
Aug 28 14:21:47.870: INFO: (6) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 8.843669ms)
Aug 28 14:21:47.873: INFO: (7) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 2.962602ms)
Aug 28 14:21:47.875: INFO: (7) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 4.772667ms)
Aug 28 14:21:47.875: INFO: (7) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 5.102895ms)
Aug 28 14:21:47.876: INFO: (7) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 5.489208ms)
Aug 28 14:21:47.877: INFO: (7) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 6.31224ms)
Aug 28 14:21:47.877: INFO: (7) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test (200; 6.425957ms)
Aug 28 14:21:47.877: INFO: (7) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.602819ms)
Aug 28 14:21:47.877: INFO: (7) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.85228ms)
Aug 28 14:21:47.877: INFO: (7) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 6.778706ms)
Aug 28 14:21:47.878: INFO: (7) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 7.251446ms)
Aug 28 14:21:47.878: INFO: (7) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 6.898741ms)
Aug 28 14:21:47.878: INFO: (7) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 7.463111ms)
Aug 28 14:21:47.878: INFO: (7) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 7.1971ms)
Aug 28 14:21:47.878: INFO: (7) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 7.634598ms)
Aug 28 14:21:47.882: INFO: (8) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 4.388748ms)
Aug 28 14:21:47.883: INFO: (8) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: ... (200; 5.23497ms)
Aug 28 14:21:47.884: INFO: (8) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 5.153981ms)
Aug 28 14:21:47.886: INFO: (8) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 7.246256ms)
Aug 28 14:21:47.886: INFO: (8) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 7.118307ms)
Aug 28 14:21:47.886: INFO: (8) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 7.279701ms)
Aug 28 14:21:47.886: INFO: (8) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 7.923468ms)
Aug 28 14:21:47.886: INFO: (8) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 7.813637ms)
Aug 28 14:21:47.886: INFO: (8) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 8.170585ms)
Aug 28 14:21:47.886: INFO: (8) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 8.341278ms)
Aug 28 14:21:47.886: INFO: (8) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 8.06878ms)
Aug 28 14:21:47.887: INFO: (8) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 8.228424ms)
Aug 28 14:21:47.887: INFO: (8) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 8.06851ms)
Aug 28 14:21:47.893: INFO: (9) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.269121ms)
Aug 28 14:21:47.893: INFO: (9) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 6.130739ms)
Aug 28 14:21:47.893: INFO: (9) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 6.209693ms)
Aug 28 14:21:47.895: INFO: (9) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 8.019297ms)
Aug 28 14:21:47.895: INFO: (9) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 8.082802ms)
Aug 28 14:21:47.895: INFO: (9) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 7.854028ms)
Aug 28 14:21:47.897: INFO: (9) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 9.66544ms)
Aug 28 14:21:47.897: INFO: (9) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 9.68186ms)
Aug 28 14:21:47.897: INFO: (9) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 9.292197ms)
Aug 28 14:21:47.897: INFO: (9) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 9.647897ms)
Aug 28 14:21:47.897: INFO: (9) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 9.534658ms)
Aug 28 14:21:47.897: INFO: (9) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test<... (200; 4.654373ms)
Aug 28 14:21:47.903: INFO: (10) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 4.536075ms)
Aug 28 14:21:47.903: INFO: (10) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 5.049118ms)
Aug 28 14:21:47.903: INFO: (10) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 4.783231ms)
Aug 28 14:21:47.903: INFO: (10) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 4.874476ms)
Aug 28 14:21:47.903: INFO: (10) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 5.440684ms)
Aug 28 14:21:47.903: INFO: (10) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 5.874082ms)
Aug 28 14:21:47.904: INFO: (10) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 5.995458ms)
Aug 28 14:21:47.904: INFO: (10) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test<... (200; 2.490307ms)
Aug 28 14:21:47.911: INFO: (11) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 5.798945ms)
Aug 28 14:21:47.911: INFO: (11) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 5.976226ms)
Aug 28 14:21:47.911: INFO: (11) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test (200; 7.708275ms)
Aug 28 14:21:47.913: INFO: (11) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 7.879355ms)
Aug 28 14:21:47.913: INFO: (11) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 7.954292ms)
Aug 28 14:21:47.913: INFO: (11) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 7.709199ms)
Aug 28 14:21:47.918: INFO: (12) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 4.64665ms)
Aug 28 14:21:47.918: INFO: (12) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 4.781172ms)
Aug 28 14:21:47.918: INFO: (12) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 4.556104ms)
Aug 28 14:21:47.918: INFO: (12) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 4.984698ms)
Aug 28 14:21:47.918: INFO: (12) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 5.142755ms)
Aug 28 14:21:47.918: INFO: (12) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 5.23878ms)
Aug 28 14:21:47.918: INFO: (12) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 5.523481ms)
Aug 28 14:21:47.919: INFO: (12) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 5.57908ms)
Aug 28 14:21:47.919: INFO: (12) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 5.715071ms)
Aug 28 14:21:47.920: INFO: (12) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.071238ms)
Aug 28 14:21:47.920: INFO: (12) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 6.344991ms)
Aug 28 14:21:47.920: INFO: (12) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 6.508815ms)
Aug 28 14:21:47.920: INFO: (12) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 6.666213ms)
Aug 28 14:21:47.920: INFO: (12) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 6.785215ms)
Aug 28 14:21:47.920: INFO: (12) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test (200; 4.968727ms)
Aug 28 14:21:47.926: INFO: (13) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 5.170435ms)
Aug 28 14:21:47.926: INFO: (13) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 5.370778ms)
Aug 28 14:21:47.926: INFO: (13) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 5.251595ms)
Aug 28 14:21:47.926: INFO: (13) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test<... (200; 5.246037ms)
Aug 28 14:21:47.926: INFO: (13) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 5.727884ms)
Aug 28 14:21:47.926: INFO: (13) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 5.477671ms)
Aug 28 14:21:47.926: INFO: (13) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 5.792433ms)
Aug 28 14:21:47.927: INFO: (13) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 5.781218ms)
Aug 28 14:21:47.927: INFO: (13) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 5.951898ms)
Aug 28 14:21:47.927: INFO: (13) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 6.056626ms)
Aug 28 14:21:47.927: INFO: (13) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 6.22864ms)
Aug 28 14:21:47.927: INFO: (13) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.28818ms)
Aug 28 14:21:47.928: INFO: (13) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.94293ms)
Aug 28 14:21:47.931: INFO: (14) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 3.078993ms)
Aug 28 14:21:47.932: INFO: (14) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 4.257945ms)
Aug 28 14:21:47.932: INFO: (14) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test (200; 4.363881ms)
Aug 28 14:21:47.934: INFO: (14) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 5.599621ms)
Aug 28 14:21:47.934: INFO: (14) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 5.919326ms)
Aug 28 14:21:47.934: INFO: (14) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 6.23037ms)
Aug 28 14:21:47.934: INFO: (14) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 6.38106ms)
Aug 28 14:21:47.934: INFO: (14) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 6.542079ms)
Aug 28 14:21:47.934: INFO: (14) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.47167ms)
Aug 28 14:21:47.936: INFO: (14) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 7.590972ms)
Aug 28 14:21:47.936: INFO: (14) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 8.052018ms)
Aug 28 14:21:47.936: INFO: (14) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 8.141744ms)
Aug 28 14:21:47.936: INFO: (14) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 8.117086ms)
Aug 28 14:21:47.936: INFO: (14) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 8.639695ms)
Aug 28 14:21:47.936: INFO: (14) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 8.452277ms)
Aug 28 14:21:47.940: INFO: (15) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 3.736574ms)
Aug 28 14:21:47.941: INFO: (15) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 3.823523ms)
Aug 28 14:21:47.942: INFO: (15) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 5.561344ms)
Aug 28 14:21:47.942: INFO: (15) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 5.819511ms)
Aug 28 14:21:47.943: INFO: (15) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 5.934638ms)
Aug 28 14:21:47.943: INFO: (15) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 6.308217ms)
Aug 28 14:21:47.943: INFO: (15) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 6.219624ms)
Aug 28 14:21:47.943: INFO: (15) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 6.263042ms)
Aug 28 14:21:47.943: INFO: (15) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 6.554652ms)
Aug 28 14:21:47.944: INFO: (15) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test<... (200; 6.820937ms)
Aug 28 14:21:47.944: INFO: (15) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 6.917323ms)
Aug 28 14:21:47.944: INFO: (15) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 7.140811ms)
Aug 28 14:21:47.944: INFO: (15) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 6.998994ms)
Aug 28 14:21:47.944: INFO: (15) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 7.297935ms)
Aug 28 14:21:47.948: INFO: (16) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 3.677056ms)
Aug 28 14:21:47.949: INFO: (16) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: ... (200; 5.170121ms)
Aug 28 14:21:47.950: INFO: (16) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 5.821158ms)
Aug 28 14:21:47.950: INFO: (16) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 5.656228ms)
Aug 28 14:21:47.950: INFO: (16) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 6.12705ms)
Aug 28 14:21:47.950: INFO: (16) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 5.846207ms)
Aug 28 14:21:47.950: INFO: (16) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 6.229437ms)
Aug 28 14:21:47.950: INFO: (16) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 6.26034ms)
Aug 28 14:21:47.952: INFO: (16) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 7.396067ms)
Aug 28 14:21:47.952: INFO: (16) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 7.498612ms)
Aug 28 14:21:47.952: INFO: (16) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 7.391817ms)
Aug 28 14:21:47.952: INFO: (16) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 7.372964ms)
Aug 28 14:21:47.952: INFO: (16) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 7.375458ms)
Aug 28 14:21:47.952: INFO: (16) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 7.878754ms)
Aug 28 14:21:47.952: INFO: (16) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 7.598612ms)
Aug 28 14:21:47.958: INFO: (17) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: test<... (200; 6.317767ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 6.455741ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 6.227514ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 6.51158ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 6.616198ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 6.794679ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 6.515512ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 6.653413ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 6.811666ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 6.740532ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 6.891642ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 6.893613ms)
Aug 28 14:21:47.959: INFO: (17) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 7.076617ms)
Aug 28 14:21:47.960: INFO: (17) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 6.975693ms)
Aug 28 14:21:47.963: INFO: (18) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 3.101482ms)
Aug 28 14:21:47.965: INFO: (18) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 4.024334ms)
Aug 28 14:21:47.965: INFO: (18) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 4.098456ms)
Aug 28 14:21:47.965: INFO: (18) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 4.961461ms)
Aug 28 14:21:47.965: INFO: (18) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 4.457763ms)
Aug 28 14:21:47.966: INFO: (18) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 5.299029ms)
Aug 28 14:21:47.968: INFO: (18) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 7.053683ms)
Aug 28 14:21:47.968: INFO: (18) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 7.054829ms)
Aug 28 14:21:47.968: INFO: (18) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 7.004574ms)
Aug 28 14:21:47.968: INFO: (18) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 7.243219ms)
Aug 28 14:21:47.968: INFO: (18) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 7.202793ms)
Aug 28 14:21:47.968: INFO: (18) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname2/proxy/: bar (200; 7.436802ms)
Aug 28 14:21:47.968: INFO: (18) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 7.505754ms)
Aug 28 14:21:47.969: INFO: (18) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: ... (200; ...)
Aug 28 14:21:48.001: INFO: (19) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn/proxy/: test (200; 31.613224ms)
Aug 28 14:21:48.001: INFO: (19) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:162/proxy/: bar (200; 31.760288ms)
Aug 28 14:21:48.001: INFO: (19) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname2/proxy/: bar (200; 31.953685ms)
Aug 28 14:21:48.001: INFO: (19) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:1080/proxy/: ... (200; 31.935333ms)
Aug 28 14:21:48.001: INFO: (19) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:1080/proxy/: test<... (200; 32.147904ms)
Aug 28 14:21:48.011: INFO: (19) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:462/proxy/: tls qux (200; 41.594932ms)
Aug 28 14:21:48.011: INFO: (19) /api/v1/namespaces/proxy-7682/pods/http:proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 41.881529ms)
Aug 28 14:21:48.011: INFO: (19) /api/v1/namespaces/proxy-7682/pods/proxy-service-l75bb-r9xhn:160/proxy/: foo (200; 41.872546ms)
Aug 28 14:21:48.011: INFO: (19) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:460/proxy/: tls baz (200; 41.898964ms)
Aug 28 14:21:48.011: INFO: (19) /api/v1/namespaces/proxy-7682/services/http:proxy-service-l75bb:portname1/proxy/: foo (200; 42.168498ms)
Aug 28 14:21:48.012: INFO: (19) /api/v1/namespaces/proxy-7682/services/proxy-service-l75bb:portname1/proxy/: foo (200; 42.415049ms)
Aug 28 14:21:48.012: INFO: (19) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname1/proxy/: tls baz (200; 42.552744ms)
Aug 28 14:21:48.012: INFO: (19) /api/v1/namespaces/proxy-7682/services/https:proxy-service-l75bb:tlsportname2/proxy/: tls qux (200; 42.372188ms)
Aug 28 14:21:48.013: INFO: (19) /api/v1/namespaces/proxy-7682/pods/https:proxy-service-l75bb-r9xhn:443/proxy/: ... (200; ...)
...
STEP: Creating a kubernetes client
...: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Aug 28 14:22:00.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config cluster-info'
Aug 28 14:22:01.599: INFO: stderr: ""
Aug 28 14:22:01.599: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:44383\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:44383/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
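The stdout above is wrapped in ANSI SGR color escapes (`\x1b[0;32m`, `\x1b[0m`, …). A small helper to strip them before matching the text — the regex covers only SGR `…m` sequences, which is all `kubectl cluster-info` emits here:

```python
import re

# CSI SGR sequences look like ESC [ <params> m, e.g. \x1b[0;32m (green).
ANSI_SGR_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s: str) -> str:
    # Remove color codes so plain-text matching (e.g. "Kubernetes master") works.
    return ANSI_SGR_RE.sub("", s)

sample = "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:44383\x1b[0m"
print(strip_ansi(sample))
```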
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:22:01.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-319" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":179,"skipped":2990,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:22:01.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9954.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9954.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
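The `awk` pipeline in the probe commands above rewrites the pod's IP into its dashed pod A-record name under `<namespace>.pod.cluster.local`. A minimal Python equivalent (the sample IP in the test below is illustrative, not taken from this run):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    # Mirror the shell pipeline: 10.244.1.5 -> 10-244-1-5.<ns>.pod.cluster.local
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"
```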

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 14:22:29.053: INFO: DNS probes using dns-9954/dns-test-013730c2-022a-4ff8-9e94-3426795e01b3 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:22:30.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9954" for this suite.

• [SLOW TEST:30.443 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":180,"skipped":2997,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:22:32.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:22:34.372: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1" in namespace "downward-api-6959" to be "Succeeded or Failed"
Aug 28 14:22:34.819: INFO: Pod "downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1": Phase="Pending", Reason="", readiness=false. Elapsed: 446.857006ms
Aug 28 14:22:37.084: INFO: Pod "downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.710982265s
Aug 28 14:22:39.125: INFO: Pod "downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.752640408s
Aug 28 14:22:41.281: INFO: Pod "downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.908779259s
Aug 28 14:22:43.291: INFO: Pod "downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.918426393s
Aug 28 14:22:45.388: INFO: Pod "downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.015410276s
Aug 28 14:22:47.531: INFO: Pod "downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.158218222s
STEP: Saw pod success
Aug 28 14:22:47.531: INFO: Pod "downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1" satisfied condition "Succeeded or Failed"
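The "Waiting up to 5m0s … to be \"Succeeded or Failed\"" sequence above is a fixed-interval poll of the pod phase. A minimal sketch of such a wait loop, with a caller-supplied predicate; the fake phase sequence stands in for a live apiserver:

```python
import time

def wait_for(predicate, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    # Poll until predicate() is truthy; return elapsed seconds, or raise
    # after `timeout`, like the framework's "Succeeded or Failed" wait.
    start = clock()
    while True:
        if predicate():
            return clock() - start
        if clock() - start > timeout:
            raise TimeoutError(f"condition not met within {timeout:.0f}s")
        sleep(interval)

# Example against a fake phase sequence instead of a real cluster:
phases = iter(["Pending", "Pending", "Succeeded"])
elapsed = wait_for(lambda: next(phases) in ("Succeeded", "Failed"),
                   sleep=lambda _: None)
```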
Aug 28 14:22:47.537: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1 container client-container: 
STEP: delete the pod
Aug 28 14:22:47.857: INFO: Waiting for pod downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1 to disappear
Aug 28 14:22:48.144: INFO: Pod downwardapi-volume-4d65dcb4-25fc-49c7-8596-5fc3fdfc68d1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:22:48.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6959" for this suite.

• [SLOW TEST:16.117 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3049,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:22:48.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:23:49.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4980" for this suite.

• [SLOW TEST:61.027 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3051,"failed":0}
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:23:49.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-6d2356c8-defb-4ecd-b271-e1ce2ea455e5 in namespace container-probe-8491
Aug 28 14:23:57.735: INFO: Started pod liveness-6d2356c8-defb-4ecd-b271-e1ce2ea455e5 in namespace container-probe-8491
STEP: checking the pod's current state and verifying that restartCount is present
Aug 28 14:23:57.740: INFO: Initial restart count of pod liveness-6d2356c8-defb-4ecd-b271-e1ce2ea455e5 is 0
Aug 28 14:24:14.161: INFO: Restart count of pod container-probe-8491/liveness-6d2356c8-defb-4ecd-b271-e1ce2ea455e5 is now 1 (16.420341912s elapsed)
Aug 28 14:24:36.271: INFO: Restart count of pod container-probe-8491/liveness-6d2356c8-defb-4ecd-b271-e1ce2ea455e5 is now 2 (38.530517197s elapsed)
Aug 28 14:24:52.606: INFO: Restart count of pod container-probe-8491/liveness-6d2356c8-defb-4ecd-b271-e1ce2ea455e5 is now 3 (54.864952096s elapsed)
Aug 28 14:25:12.850: INFO: Restart count of pod container-probe-8491/liveness-6d2356c8-defb-4ecd-b271-e1ce2ea455e5 is now 4 (1m15.109267838s elapsed)
Aug 28 14:26:25.529: INFO: Restart count of pod container-probe-8491/liveness-6d2356c8-defb-4ecd-b271-e1ce2ea455e5 is now 5 (2m27.788318816s elapsed)
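The test asserts that `restartCount` only ever increases between observations; that check can be mirrored over the counts logged above:

```python
def is_monotonically_increasing(counts):
    # True when each observation is >= its predecessor, as the liveness
    # probe test requires of restartCount.
    return all(b >= a for a, b in zip(counts, counts[1:]))

observed = [0, 1, 2, 3, 4, 5]  # restart counts from the log lines above
```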
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:26:26.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8491" for this suite.

• [SLOW TEST:157.616 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3051,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:26:26.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 28 14:26:41.336: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4326 pod-service-account-c010170d-19f5-4413-b96f-4bbd2bb23578 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 28 14:26:43.385: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4326 pod-service-account-c010170d-19f5-4413-b96f-4bbd2bb23578 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 28 14:26:45.042: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4326 pod-service-account-c010170d-19f5-4413-b96f-4bbd2bb23578 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
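The three `kubectl exec … cat` invocations above read the standard projected service-account mount. The fixed in-pod paths can be listed as below (these are the well-known defaults, not values printed in this log):

```python
# Default in-pod mount point for the service-account credentials.
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
SA_FILES = [f"{SA_DIR}/{name}" for name in ("token", "ca.crt", "namespace")]
```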
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:26:46.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4326" for this suite.

• [SLOW TEST:19.687 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":184,"skipped":3073,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:26:46.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 28 14:26:50.450: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:26:51.005: INFO: Number of nodes with available pods: 0
Aug 28 14:26:51.005: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:26:52.229: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:26:52.245: INFO: Number of nodes with available pods: 0
Aug 28 14:26:52.245: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:26:53.188: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:26:53.468: INFO: Number of nodes with available pods: 0
Aug 28 14:26:53.469: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:26:54.256: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:26:54.823: INFO: Number of nodes with available pods: 0
Aug 28 14:26:54.824: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:26:55.099: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:26:55.867: INFO: Number of nodes with available pods: 0
Aug 28 14:26:55.867: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:26:56.194: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:26:56.666: INFO: Number of nodes with available pods: 0
Aug 28 14:26:56.666: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:26:57.143: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:26:57.622: INFO: Number of nodes with available pods: 0
Aug 28 14:26:57.622: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:26:58.490: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:26:58.732: INFO: Number of nodes with available pods: 0
Aug 28 14:26:58.732: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:26:59.278: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:26:59.654: INFO: Number of nodes with available pods: 0
Aug 28 14:26:59.654: INFO: Node kali-worker is running more than one daemon pod
Aug 28 14:27:00.014: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:00.021: INFO: Number of nodes with available pods: 1
Aug 28 14:27:00.021: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:01.083: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:01.103: INFO: Number of nodes with available pods: 2
Aug 28 14:27:01.103: INFO: Number of running nodes: 2, number of available pods: 2
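The launch check above loops until every schedulable node runs an available daemon pod. Against a DaemonSet `status` dict, that comparison is roughly the following (field names follow the apps/v1 DaemonSetStatus schema; this is a sketch, not the framework's code):

```python
def daemonset_ready(status: dict) -> bool:
    # Ready when at least one node is targeted and every targeted node
    # has an available daemon pod.
    desired = status.get("desiredNumberScheduled", 0)
    return desired > 0 and status.get("numberAvailable", 0) == desired
```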
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 28 14:27:01.175: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:01.204: INFO: Number of nodes with available pods: 1
Aug 28 14:27:01.204: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:02.215: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:02.220: INFO: Number of nodes with available pods: 1
Aug 28 14:27:02.220: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:03.614: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:03.836: INFO: Number of nodes with available pods: 1
Aug 28 14:27:03.837: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:04.602: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:04.895: INFO: Number of nodes with available pods: 1
Aug 28 14:27:04.895: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:05.213: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:05.220: INFO: Number of nodes with available pods: 1
Aug 28 14:27:05.220: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:07.058: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:07.395: INFO: Number of nodes with available pods: 1
Aug 28 14:27:07.395: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:08.726: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:08.794: INFO: Number of nodes with available pods: 1
Aug 28 14:27:08.794: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:09.214: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:09.220: INFO: Number of nodes with available pods: 1
Aug 28 14:27:09.220: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:10.464: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:10.610: INFO: Number of nodes with available pods: 1
Aug 28 14:27:10.610: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:11.213: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:11.220: INFO: Number of nodes with available pods: 1
Aug 28 14:27:11.220: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:12.216: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:12.223: INFO: Number of nodes with available pods: 1
Aug 28 14:27:12.223: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:13.373: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:13.379: INFO: Number of nodes with available pods: 1
Aug 28 14:27:13.379: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:14.328: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:14.335: INFO: Number of nodes with available pods: 1
Aug 28 14:27:14.335: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:15.215: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:15.221: INFO: Number of nodes with available pods: 1
Aug 28 14:27:15.222: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:16.214: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:16.222: INFO: Number of nodes with available pods: 1
Aug 28 14:27:16.222: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:17.216: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:17.224: INFO: Number of nodes with available pods: 1
Aug 28 14:27:17.224: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:18.214: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:18.219: INFO: Number of nodes with available pods: 1
Aug 28 14:27:18.219: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:19.326: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:19.331: INFO: Number of nodes with available pods: 1
Aug 28 14:27:19.331: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:20.875: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:20.881: INFO: Number of nodes with available pods: 1
Aug 28 14:27:20.881: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:21.309: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:21.318: INFO: Number of nodes with available pods: 1
Aug 28 14:27:21.318: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:22.712: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:22.719: INFO: Number of nodes with available pods: 1
Aug 28 14:27:22.719: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:27:23.235: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 28 14:27:23.290: INFO: Number of nodes with available pods: 2
Aug 28 14:27:23.290: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-789, will wait for the garbage collector to delete the pods
Aug 28 14:27:23.552: INFO: Deleting DaemonSet.extensions daemon-set took: 34.843061ms
Aug 28 14:27:24.155: INFO: Terminating DaemonSet.extensions daemon-set pods took: 602.339732ms
Aug 28 14:27:37.878: INFO: Number of nodes with available pods: 0
Aug 28 14:27:37.879: INFO: Number of running nodes: 0, number of available pods: 0
Aug 28 14:27:37.883: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-789/daemonsets","resourceVersion":"1774023"},"items":null}

Aug 28 14:27:37.886: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-789/pods","resourceVersion":"1774023"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:27:37.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-789" for this suite.

• [SLOW TEST:51.390 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":185,"skipped":3202,"failed":0}
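Annotation: the repeated "DaemonSet pods can't tolerate node kali-control-plane ... skip checking this node" lines above come from the scheduling rule that a DaemonSet pod may only land on a node whose every taint is matched by one of the pod's tolerations. A minimal sketch of that check, using plain dicts rather than the controller's real Go types (node names taken from the log; the dict layout and function names are illustrative):

```python
def tolerates(toleration, taint):
    """True if one toleration matches one taint (simplified: exact key match;
    an empty effect in the toleration matches any effect)."""
    if toleration.get("key") != taint["key"]:
        return False
    return toleration.get("effect", "") in ("", taint["effect"])

def schedulable_nodes(nodes, tolerations):
    """Nodes the DaemonSet may run on: every taint on the node must be tolerated."""
    eligible = []
    for node in nodes:
        if all(any(tolerates(tol, taint) for tol in tolerations)
               for taint in node["taints"]):
            eligible.append(node["name"])
    return eligible

nodes = [
    {"name": "kali-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]},
    {"name": "kali-worker", "taints": []},
    {"name": "kali-worker2", "taints": []},
]
# The test DaemonSet carries no master toleration, so only the two workers
# qualify -- which is why the log counts at most 2 nodes with available pods.
print(schedulable_nodes(nodes, []))  # ['kali-worker', 'kali-worker2']
```

Adding a toleration with key `node-role.kubernetes.io/master` and effect `NoSchedule` would make the control-plane node eligible as well, which is how system DaemonSets such as kube-proxy run on every node.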
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:27:37.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Aug 28 14:27:40.105: INFO: created pod pod-service-account-defaultsa
Aug 28 14:27:40.106: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 28 14:27:40.127: INFO: created pod pod-service-account-mountsa
Aug 28 14:27:40.128: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 28 14:27:40.227: INFO: created pod pod-service-account-nomountsa
Aug 28 14:27:40.227: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 28 14:27:40.266: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 28 14:27:40.266: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 28 14:27:40.659: INFO: created pod pod-service-account-mountsa-mountspec
Aug 28 14:27:40.659: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 28 14:27:40.977: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 28 14:27:40.977: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 28 14:27:41.018: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 28 14:27:41.019: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 28 14:27:41.583: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 28 14:27:41.584: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 28 14:27:41.648: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 28 14:27:41.648: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:27:41.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9535" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":186,"skipped":3242,"failed":0}
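Annotation: the nine "service account token volume mount" lines above exercise the `automountServiceAccountToken` precedence rule: the pod spec's setting, when present, overrides the ServiceAccount's, and when both are unset the token is mounted. A small sketch reproducing that resolution (the function name is illustrative; the expected values are read directly from the log lines):

```python
def token_volume_mounted(pod_automount, sa_automount):
    """automountServiceAccountToken resolution: pod-level setting wins;
    otherwise the ServiceAccount's; default is to mount the token."""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# (pod name suffix, sa_automount, pod_automount, mounted-per-log)
matrix = [
    ("defaultsa",             None,  None,  True),
    ("mountsa",               True,  None,  True),
    ("nomountsa",             False, None,  False),
    ("defaultsa-mountspec",   None,  True,  True),
    ("mountsa-mountspec",     True,  True,  True),
    ("nomountsa-mountspec",   False, True,  True),
    ("defaultsa-nomountspec", None,  False, False),
    ("mountsa-nomountspec",   True,  False, False),
    ("nomountsa-nomountspec", False, False, False),
]
for name, sa, pod, expected in matrix:
    assert token_volume_mounted(pod, sa) == expected, name
print("all nine cases match the log")
```

Note that the three `*-mountspec` / `*-nomountspec` rows show the pod spec overriding the ServiceAccount in both directions, which is exactly what the conformance test verifies.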
SS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:27:42.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:27:47.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6637" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":187,"skipped":3244,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:27:47.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 28 14:27:49.858: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:28:18.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-324" for this suite.

• [SLOW TEST:30.388 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3268,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:28:18.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:28:24.258: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:28:27.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221704, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221704, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221705, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221703, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:28:30.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221704, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221704, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221705, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221703, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:28:31.932: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221704, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221704, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221705, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221703, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:28:35.309: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:28:41.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2160" for this suite.
STEP: Destroying namespace "webhook-2160-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:26.862 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":189,"skipped":3271,"failed":0}
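Annotation: the long `v1.DeploymentStatus{...}` dumps above are the e2e framework polling the webhook Deployment until it is complete; while `UnavailableReplicas` is 1 and the `Available` condition is `False` with reason `MinimumReplicasUnavailable`, it keeps waiting. A simplified sketch of the readiness predicate being polled (field names follow the DeploymentStatus dump; the exact framework condition also checks observed generation, omitted here):

```python
def deployment_complete(spec_replicas, status):
    """Roughly the condition the framework waits for: every replica updated
    and available, none unavailable. A simplified sketch, not the real
    client-go helper."""
    return (status["updatedReplicas"] == spec_replicas
            and status["availableReplicas"] == spec_replicas
            and status["unavailableReplicas"] == 0)

# Status as dumped in the log at 14:28:27 -- still progressing, so keep polling.
in_progress = {"updatedReplicas": 1, "availableReplicas": 0, "unavailableReplicas": 1}
print(deployment_complete(1, in_progress))  # False

# Once the webhook pod passes its readiness probe, the poll succeeds.
ready = {"updatedReplicas": 1, "availableReplicas": 1, "unavailableReplicas": 0}
print(deployment_complete(1, ready))  # True
```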
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:28:45.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:28:47.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9704" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":190,"skipped":3288,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:28:47.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-d1117fa0-07bb-427f-9a71-e0638cea28f3
STEP: Creating a pod to test consume configMaps
Aug 28 14:28:48.386: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106" in namespace "projected-6526" to be "Succeeded or Failed"
Aug 28 14:28:48.643: INFO: Pod "pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106": Phase="Pending", Reason="", readiness=false. Elapsed: 257.011273ms
Aug 28 14:28:52.019: INFO: Pod "pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106": Phase="Pending", Reason="", readiness=false. Elapsed: 3.632781209s
Aug 28 14:28:54.026: INFO: Pod "pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106": Phase="Pending", Reason="", readiness=false. Elapsed: 5.639928212s
Aug 28 14:28:56.184: INFO: Pod "pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106": Phase="Pending", Reason="", readiness=false. Elapsed: 7.797965498s
Aug 28 14:28:58.289: INFO: Pod "pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106": Phase="Pending", Reason="", readiness=false. Elapsed: 9.903098194s
Aug 28 14:29:01.348: INFO: Pod "pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106": Phase="Pending", Reason="", readiness=false. Elapsed: 12.962007706s
Aug 28 14:29:03.376: INFO: Pod "pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.98966248s
STEP: Saw pod success
Aug 28 14:29:03.376: INFO: Pod "pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106" satisfied condition "Succeeded or Failed"
Aug 28 14:29:03.756: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 28 14:29:04.743: INFO: Waiting for pod pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106 to disappear
Aug 28 14:29:04.826: INFO: Pod pod-projected-configmaps-a9674470-063d-4d0d-8ec4-dc505f746106 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:29:04.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6526" for this suite.

• [SLOW TEST:17.679 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3304,"failed":0}
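Annotation: the `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` sequence above (Pending at 257ms, 3.6s, 5.6s, ... then Succeeded at 15.0s) is a poll-until-terminal-phase loop. A minimal sketch of that loop, with `get_phase` as a hypothetical stand-in for the client Get call the framework makes each iteration:

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, interval_s=2.0):
    """Poll a pod's phase until it reaches a terminal state, mirroring the
    framework's 'Succeeded or Failed' wait. get_phase is any callable
    returning the current phase string (a stand-in for a real API Get)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulate the observations from the log: several Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_pod_condition(lambda: next(phases), timeout_s=10, interval_s=0.01))  # Succeeded
```

The real framework additionally re-fetches the pod object and distinguishes "Succeeded" (test passes, logs are collected) from "Failed"; this sketch only shows the polling shape.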
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:29:04.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-knbn
STEP: Creating a pod to test atomic-volume-subpath
Aug 28 14:29:07.812: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-knbn" in namespace "subpath-5463" to be "Succeeded or Failed"
Aug 28 14:29:07.884: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Pending", Reason="", readiness=false. Elapsed: 71.473287ms
Aug 28 14:29:10.414: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.60192487s
Aug 28 14:29:12.931: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Pending", Reason="", readiness=false. Elapsed: 5.118429333s
Aug 28 14:29:15.139: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Pending", Reason="", readiness=false. Elapsed: 7.326516762s
Aug 28 14:29:17.144: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Pending", Reason="", readiness=false. Elapsed: 9.332015829s
Aug 28 14:29:19.152: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Pending", Reason="", readiness=false. Elapsed: 11.339458114s
Aug 28 14:29:21.635: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Running", Reason="", readiness=true. Elapsed: 13.82249241s
Aug 28 14:29:23.642: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Running", Reason="", readiness=true. Elapsed: 15.829914542s
Aug 28 14:29:25.703: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Running", Reason="", readiness=true. Elapsed: 17.891205095s
Aug 28 14:29:27.823: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Running", Reason="", readiness=true. Elapsed: 20.010750219s
Aug 28 14:29:29.831: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Running", Reason="", readiness=true. Elapsed: 22.018551575s
Aug 28 14:29:32.117: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Running", Reason="", readiness=true. Elapsed: 24.30513786s
Aug 28 14:29:34.125: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Running", Reason="", readiness=true. Elapsed: 26.313358421s
Aug 28 14:29:36.134: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Running", Reason="", readiness=true. Elapsed: 28.321633497s
Aug 28 14:29:38.142: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Running", Reason="", readiness=true. Elapsed: 30.329789485s
Aug 28 14:29:40.205: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Running", Reason="", readiness=true. Elapsed: 32.392739166s
Aug 28 14:29:42.488: INFO: Pod "pod-subpath-test-downwardapi-knbn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.676234222s
STEP: Saw pod success
Aug 28 14:29:42.489: INFO: Pod "pod-subpath-test-downwardapi-knbn" satisfied condition "Succeeded or Failed"
Aug 28 14:29:42.494: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-knbn container test-container-subpath-downwardapi-knbn: 
STEP: delete the pod
Aug 28 14:29:42.948: INFO: Waiting for pod pod-subpath-test-downwardapi-knbn to disappear
Aug 28 14:29:43.007: INFO: Pod pod-subpath-test-downwardapi-knbn no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-knbn
Aug 28 14:29:43.007: INFO: Deleting pod "pod-subpath-test-downwardapi-knbn" in namespace "subpath-5463"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:29:43.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5463" for this suite.

• [SLOW TEST:38.045 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":192,"skipped":3310,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:29:43.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:29:47.412: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:29:49.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221787, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221787, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221788, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221786, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:29:51.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221787, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221787, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221788, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221786, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:29:54.928: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:29:55.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-780" for this suite.
STEP: Destroying namespace "webhook-780-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.217 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":193,"skipped":3362,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:29:57.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:30:14.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9902" for this suite.

• [SLOW TEST:16.952 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":194,"skipped":3377,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:30:14.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4659
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4659
STEP: creating replication controller externalsvc in namespace services-4659
I0828 14:30:14.525843      11 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4659, replica count: 2
I0828 14:30:17.577298      11 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:30:20.577961      11 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:30:23.578530      11 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 28 14:30:23.613: INFO: Creating new exec pod
Aug 28 14:30:27.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-4659 execpod2bsp4 -- /bin/sh -x -c nslookup clusterip-service'
Aug 28 14:30:36.559: INFO: stderr: "I0828 14:30:36.426132    3351 log.go:172] (0x400003adc0) (0x4000aa8280) Create stream\nI0828 14:30:36.430696    3351 log.go:172] (0x400003adc0) (0x4000aa8280) Stream added, broadcasting: 1\nI0828 14:30:36.446833    3351 log.go:172] (0x400003adc0) Reply frame received for 1\nI0828 14:30:36.447780    3351 log.go:172] (0x400003adc0) (0x4000a02000) Create stream\nI0828 14:30:36.447876    3351 log.go:172] (0x400003adc0) (0x4000a02000) Stream added, broadcasting: 3\nI0828 14:30:36.450261    3351 log.go:172] (0x400003adc0) Reply frame received for 3\nI0828 14:30:36.450766    3351 log.go:172] (0x400003adc0) (0x4000aa8320) Create stream\nI0828 14:30:36.450926    3351 log.go:172] (0x400003adc0) (0x4000aa8320) Stream added, broadcasting: 5\nI0828 14:30:36.452634    3351 log.go:172] (0x400003adc0) Reply frame received for 5\nI0828 14:30:36.524440    3351 log.go:172] (0x400003adc0) Data frame received for 5\nI0828 14:30:36.524678    3351 log.go:172] (0x4000aa8320) (5) Data frame handling\nI0828 14:30:36.525226    3351 log.go:172] (0x4000aa8320) (5) Data frame sent\n+ nslookup clusterip-service\nI0828 14:30:36.531432    3351 log.go:172] (0x400003adc0) Data frame received for 3\nI0828 14:30:36.531549    3351 log.go:172] (0x4000a02000) (3) Data frame handling\nI0828 14:30:36.531664    3351 log.go:172] (0x4000a02000) (3) Data frame sent\nI0828 14:30:36.532172    3351 log.go:172] (0x400003adc0) Data frame received for 3\nI0828 14:30:36.532268    3351 log.go:172] (0x4000a02000) (3) Data frame handling\nI0828 14:30:36.532379    3351 log.go:172] (0x4000a02000) (3) Data frame sent\nI0828 14:30:36.532487    3351 log.go:172] (0x400003adc0) Data frame received for 3\nI0828 14:30:36.532568    3351 log.go:172] (0x4000a02000) (3) Data frame handling\nI0828 14:30:36.533053    3351 log.go:172] (0x400003adc0) Data frame received for 5\nI0828 14:30:36.533154    3351 log.go:172] (0x4000aa8320) (5) Data frame handling\nI0828 14:30:36.534388    3351 log.go:172] 
(0x400003adc0) Data frame received for 1\nI0828 14:30:36.534494    3351 log.go:172] (0x4000aa8280) (1) Data frame handling\nI0828 14:30:36.534581    3351 log.go:172] (0x4000aa8280) (1) Data frame sent\nI0828 14:30:36.535632    3351 log.go:172] (0x400003adc0) (0x4000aa8280) Stream removed, broadcasting: 1\nI0828 14:30:36.538619    3351 log.go:172] (0x400003adc0) Go away received\nI0828 14:30:36.539931    3351 log.go:172] (0x400003adc0) (0x4000aa8280) Stream removed, broadcasting: 1\nI0828 14:30:36.540209    3351 log.go:172] (0x400003adc0) (0x4000a02000) Stream removed, broadcasting: 3\nI0828 14:30:36.540428    3351 log.go:172] (0x400003adc0) (0x4000aa8320) Stream removed, broadcasting: 5\n"
Aug 28 14:30:36.560: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4659.svc.cluster.local\tcanonical name = externalsvc.services-4659.svc.cluster.local.\nName:\texternalsvc.services-4659.svc.cluster.local\nAddress: 10.96.152.167\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4659, will wait for the garbage collector to delete the pods
Aug 28 14:30:36.624: INFO: Deleting ReplicationController externalsvc took: 8.436524ms
Aug 28 14:30:37.125: INFO: Terminating ReplicationController externalsvc pods took: 500.994835ms
Aug 28 14:30:42.359: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:30:42.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4659" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:28.200 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":195,"skipped":3396,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:30:42.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 28 14:30:49.119: INFO: Successfully updated pod "annotationupdate2819f805-881b-4476-9854-701bec2e2439"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:30:51.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3946" for this suite.

• [SLOW TEST:8.775 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3407,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:30:51.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:31:00.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-769" for this suite.

• [SLOW TEST:9.501 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":197,"skipped":3409,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:31:00.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-1560/configmap-test-14a25e5f-8c98-4012-86ab-27d39197f73e
STEP: Creating a pod to test consume configMaps
Aug 28 14:31:01.670: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc112bc9-b072-4cdc-bbd9-29212ecd2694" in namespace "configmap-1560" to be "Succeeded or Failed"
Aug 28 14:31:02.309: INFO: Pod "pod-configmaps-cc112bc9-b072-4cdc-bbd9-29212ecd2694": Phase="Pending", Reason="", readiness=false. Elapsed: 638.559293ms
Aug 28 14:31:04.585: INFO: Pod "pod-configmaps-cc112bc9-b072-4cdc-bbd9-29212ecd2694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.914784295s
Aug 28 14:31:06.620: INFO: Pod "pod-configmaps-cc112bc9-b072-4cdc-bbd9-29212ecd2694": Phase="Pending", Reason="", readiness=false. Elapsed: 4.949982682s
Aug 28 14:31:08.878: INFO: Pod "pod-configmaps-cc112bc9-b072-4cdc-bbd9-29212ecd2694": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.207751957s
STEP: Saw pod success
Aug 28 14:31:08.878: INFO: Pod "pod-configmaps-cc112bc9-b072-4cdc-bbd9-29212ecd2694" satisfied condition "Succeeded or Failed"
Aug 28 14:31:08.883: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-cc112bc9-b072-4cdc-bbd9-29212ecd2694 container env-test: 
STEP: delete the pod
Aug 28 14:31:09.515: INFO: Waiting for pod pod-configmaps-cc112bc9-b072-4cdc-bbd9-29212ecd2694 to disappear
Aug 28 14:31:09.782: INFO: Pod pod-configmaps-cc112bc9-b072-4cdc-bbd9-29212ecd2694 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:31:09.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1560" for this suite.

• [SLOW TEST:9.185 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3422,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:31:09.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-7732
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 28 14:31:10.462: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 28 14:31:10.956: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:31:13.069: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:31:15.184: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:31:17.095: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:31:18.962: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:31:20.964: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:31:22.975: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:31:25.264: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:31:27.067: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:31:29.075: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:31:30.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:31:32.962: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:31:34.962: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 28 14:31:34.970: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 28 14:31:36.976: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 28 14:31:38.976: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 28 14:31:45.040: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.49 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7732 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:31:45.040: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:31:45.100243      11 log.go:172] (0x4000ee4580) (0x400160c780) Create stream
I0828 14:31:45.100409      11 log.go:172] (0x4000ee4580) (0x400160c780) Stream added, broadcasting: 1
I0828 14:31:45.106311      11 log.go:172] (0x4000ee4580) Reply frame received for 1
I0828 14:31:45.106473      11 log.go:172] (0x4000ee4580) (0x40010052c0) Create stream
I0828 14:31:45.106543      11 log.go:172] (0x4000ee4580) (0x40010052c0) Stream added, broadcasting: 3
I0828 14:31:45.107826      11 log.go:172] (0x4000ee4580) Reply frame received for 3
I0828 14:31:45.107977      11 log.go:172] (0x4000ee4580) (0x4001b4b040) Create stream
I0828 14:31:45.108066      11 log.go:172] (0x4000ee4580) (0x4001b4b040) Stream added, broadcasting: 5
I0828 14:31:45.109252      11 log.go:172] (0x4000ee4580) Reply frame received for 5
I0828 14:31:46.161655      11 log.go:172] (0x4000ee4580) Data frame received for 3
I0828 14:31:46.161931      11 log.go:172] (0x40010052c0) (3) Data frame handling
I0828 14:31:46.162173      11 log.go:172] (0x40010052c0) (3) Data frame sent
I0828 14:31:46.162328      11 log.go:172] (0x4000ee4580) Data frame received for 3
I0828 14:31:46.162484      11 log.go:172] (0x4000ee4580) Data frame received for 5
I0828 14:31:46.162656      11 log.go:172] (0x4001b4b040) (5) Data frame handling
I0828 14:31:46.162748      11 log.go:172] (0x40010052c0) (3) Data frame handling
I0828 14:31:46.163655      11 log.go:172] (0x4000ee4580) Data frame received for 1
I0828 14:31:46.163818      11 log.go:172] (0x400160c780) (1) Data frame handling
I0828 14:31:46.163945      11 log.go:172] (0x400160c780) (1) Data frame sent
I0828 14:31:46.164160      11 log.go:172] (0x4000ee4580) (0x400160c780) Stream removed, broadcasting: 1
I0828 14:31:46.164326      11 log.go:172] (0x4000ee4580) Go away received
I0828 14:31:46.164916      11 log.go:172] (0x4000ee4580) (0x400160c780) Stream removed, broadcasting: 1
I0828 14:31:46.165123      11 log.go:172] (0x4000ee4580) (0x40010052c0) Stream removed, broadcasting: 3
I0828 14:31:46.165238      11 log.go:172] (0x4000ee4580) (0x4001b4b040) Stream removed, broadcasting: 5
Aug 28 14:31:46.165: INFO: Found all expected endpoints: [netserver-0]
Aug 28 14:31:46.171: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.37 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7732 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:31:46.171: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:31:46.227322      11 log.go:172] (0x4003148840) (0x4001b4b860) Create stream
I0828 14:31:46.227471      11 log.go:172] (0x4003148840) (0x4001b4b860) Stream added, broadcasting: 1
I0828 14:31:46.231603      11 log.go:172] (0x4003148840) Reply frame received for 1
I0828 14:31:46.231782      11 log.go:172] (0x4003148840) (0x4001005400) Create stream
I0828 14:31:46.231855      11 log.go:172] (0x4003148840) (0x4001005400) Stream added, broadcasting: 3
I0828 14:31:46.233275      11 log.go:172] (0x4003148840) Reply frame received for 3
I0828 14:31:46.233404      11 log.go:172] (0x4003148840) (0x40018e6dc0) Create stream
I0828 14:31:46.233489      11 log.go:172] (0x4003148840) (0x40018e6dc0) Stream added, broadcasting: 5
I0828 14:31:46.234847      11 log.go:172] (0x4003148840) Reply frame received for 5
I0828 14:31:47.302145      11 log.go:172] (0x4003148840) Data frame received for 3
I0828 14:31:47.302272      11 log.go:172] (0x4001005400) (3) Data frame handling
I0828 14:31:47.302357      11 log.go:172] (0x4003148840) Data frame received for 5
I0828 14:31:47.302440      11 log.go:172] (0x40018e6dc0) (5) Data frame handling
I0828 14:31:47.302523      11 log.go:172] (0x4001005400) (3) Data frame sent
I0828 14:31:47.302646      11 log.go:172] (0x4003148840) Data frame received for 3
I0828 14:31:47.302724      11 log.go:172] (0x4001005400) (3) Data frame handling
I0828 14:31:47.303495      11 log.go:172] (0x4003148840) Data frame received for 1
I0828 14:31:47.303558      11 log.go:172] (0x4001b4b860) (1) Data frame handling
I0828 14:31:47.303635      11 log.go:172] (0x4001b4b860) (1) Data frame sent
I0828 14:31:47.303741      11 log.go:172] (0x4003148840) (0x4001b4b860) Stream removed, broadcasting: 1
I0828 14:31:47.303835      11 log.go:172] (0x4003148840) Go away received
I0828 14:31:47.304017      11 log.go:172] (0x4003148840) (0x4001b4b860) Stream removed, broadcasting: 1
I0828 14:31:47.304102      11 log.go:172] (0x4003148840) (0x4001005400) Stream removed, broadcasting: 3
I0828 14:31:47.304180      11 log.go:172] (0x4003148840) (0x40018e6dc0) Stream removed, broadcasting: 5
Aug 28 14:31:47.304: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:31:47.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7732" for this suite.

• [SLOW TEST:37.432 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3449,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:31:47.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0828 14:31:48.469323      11 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 28 14:31:48.469: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:31:48.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2462" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":200,"skipped":3458,"failed":0}

------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:31:48.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8182.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8182.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8182.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8182.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8182.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8182.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 14:32:00.897: INFO: DNS probes using dns-8182/dns-test-9845fc8d-7ca8-4a89-abbe-4be94a95aaff succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:32:01.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8182" for this suite.

• [SLOW TEST:12.582 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":201,"skipped":3458,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:32:01.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-c7accc47-8bec-4086-a050-3851c03e2a4a
STEP: Creating a pod to test consume configMaps
Aug 28 14:32:01.748: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7cea19af-b1f5-4710-92dd-6519b8fdd5ad" in namespace "projected-5661" to be "Succeeded or Failed"
Aug 28 14:32:01.832: INFO: Pod "pod-projected-configmaps-7cea19af-b1f5-4710-92dd-6519b8fdd5ad": Phase="Pending", Reason="", readiness=false. Elapsed: 83.072735ms
Aug 28 14:32:03.836: INFO: Pod "pod-projected-configmaps-7cea19af-b1f5-4710-92dd-6519b8fdd5ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087399351s
Aug 28 14:32:06.123: INFO: Pod "pod-projected-configmaps-7cea19af-b1f5-4710-92dd-6519b8fdd5ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374434254s
Aug 28 14:32:08.367: INFO: Pod "pod-projected-configmaps-7cea19af-b1f5-4710-92dd-6519b8fdd5ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.618690357s
Aug 28 14:32:10.372: INFO: Pod "pod-projected-configmaps-7cea19af-b1f5-4710-92dd-6519b8fdd5ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.623234723s
STEP: Saw pod success
Aug 28 14:32:10.372: INFO: Pod "pod-projected-configmaps-7cea19af-b1f5-4710-92dd-6519b8fdd5ad" satisfied condition "Succeeded or Failed"
Aug 28 14:32:10.375: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-7cea19af-b1f5-4710-92dd-6519b8fdd5ad container projected-configmap-volume-test: 
STEP: delete the pod
Aug 28 14:32:10.501: INFO: Waiting for pod pod-projected-configmaps-7cea19af-b1f5-4710-92dd-6519b8fdd5ad to disappear
Aug 28 14:32:10.613: INFO: Pod pod-projected-configmaps-7cea19af-b1f5-4710-92dd-6519b8fdd5ad no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:32:10.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5661" for this suite.

• [SLOW TEST:9.566 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3486,"failed":0}
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:32:10.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:32:12.666: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 28 14:32:15.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:32:17.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:32:20.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221932, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:32:23.669: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:32:23.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:32:25.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2670" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:15.794 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":203,"skipped":3486,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:32:26.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:32:33.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6229" for this suite.

• [SLOW TEST:7.355 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":204,"skipped":3509,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:32:33.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:32:35.181: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"05db5ba8-4597-4db8-83bf-5771a2c1799a", Controller:(*bool)(0x4004245e5a), BlockOwnerDeletion:(*bool)(0x4004245e5b)}}
Aug 28 14:32:35.262: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"32890dbe-e4c5-48a8-ac1b-321ac8f90a7d", Controller:(*bool)(0x400421204a), BlockOwnerDeletion:(*bool)(0x400421204b)}}
Aug 28 14:32:35.327: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"685bf468-dff1-4cb8-8e09-f2128248b803", Controller:(*bool)(0x4003093be2), BlockOwnerDeletion:(*bool)(0x4003093be3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:32:40.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3238" for this suite.

• [SLOW TEST:6.725 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":205,"skipped":3544,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:32:40.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-3604
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 28 14:32:40.751: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 28 14:32:40.950: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:32:43.302: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:32:45.131: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:32:47.350: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:32:49.304: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:32:50.956: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:32:52.962: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:32:55.023: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:32:57.531: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:32:58.957: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:33:00.956: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:33:02.955: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:33:05.270: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 28 14:33:05.278: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 28 14:33:09.469: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.57:8080/dial?request=hostname&protocol=http&host=10.244.1.56&port=8080&tries=1'] Namespace:pod-network-test-3604 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:33:09.469: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:33:09.531928      11 log.go:172] (0x4000ee4630) (0x4000efed20) Create stream
I0828 14:33:09.532145      11 log.go:172] (0x4000ee4630) (0x4000efed20) Stream added, broadcasting: 1
I0828 14:33:09.535036      11 log.go:172] (0x4000ee4630) Reply frame received for 1
I0828 14:33:09.535146      11 log.go:172] (0x4000ee4630) (0x4000efedc0) Create stream
I0828 14:33:09.535193      11 log.go:172] (0x4000ee4630) (0x4000efedc0) Stream added, broadcasting: 3
I0828 14:33:09.536250      11 log.go:172] (0x4000ee4630) Reply frame received for 3
I0828 14:33:09.536385      11 log.go:172] (0x4000ee4630) (0x400153b5e0) Create stream
I0828 14:33:09.536455      11 log.go:172] (0x4000ee4630) (0x400153b5e0) Stream added, broadcasting: 5
I0828 14:33:09.537550      11 log.go:172] (0x4000ee4630) Reply frame received for 5
I0828 14:33:09.608330      11 log.go:172] (0x4000ee4630) Data frame received for 3
I0828 14:33:09.608496      11 log.go:172] (0x4000efedc0) (3) Data frame handling
I0828 14:33:09.608648      11 log.go:172] (0x4000ee4630) Data frame received for 5
I0828 14:33:09.608918      11 log.go:172] (0x400153b5e0) (5) Data frame handling
I0828 14:33:09.609042      11 log.go:172] (0x4000efedc0) (3) Data frame sent
I0828 14:33:09.609149      11 log.go:172] (0x4000ee4630) Data frame received for 3
I0828 14:33:09.609239      11 log.go:172] (0x4000efedc0) (3) Data frame handling
I0828 14:33:09.609596      11 log.go:172] (0x4000ee4630) Data frame received for 1
I0828 14:33:09.609679      11 log.go:172] (0x4000efed20) (1) Data frame handling
I0828 14:33:09.609778      11 log.go:172] (0x4000efed20) (1) Data frame sent
I0828 14:33:09.609874      11 log.go:172] (0x4000ee4630) (0x4000efed20) Stream removed, broadcasting: 1
I0828 14:33:09.610011      11 log.go:172] (0x4000ee4630) Go away received
I0828 14:33:09.610222      11 log.go:172] (0x4000ee4630) (0x4000efed20) Stream removed, broadcasting: 1
I0828 14:33:09.610308      11 log.go:172] (0x4000ee4630) (0x4000efedc0) Stream removed, broadcasting: 3
I0828 14:33:09.610376      11 log.go:172] (0x4000ee4630) (0x400153b5e0) Stream removed, broadcasting: 5
Aug 28 14:33:09.611: INFO: Waiting for responses: map[]
Aug 28 14:33:09.615: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.57:8080/dial?request=hostname&protocol=http&host=10.244.2.41&port=8080&tries=1'] Namespace:pod-network-test-3604 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:33:09.615: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:33:09.671900      11 log.go:172] (0x4000ee4e70) (0x4000950000) Create stream
I0828 14:33:09.672009      11 log.go:172] (0x4000ee4e70) (0x4000950000) Stream added, broadcasting: 1
I0828 14:33:09.674639      11 log.go:172] (0x4000ee4e70) Reply frame received for 1
I0828 14:33:09.674841      11 log.go:172] (0x4000ee4e70) (0x40017a1a40) Create stream
I0828 14:33:09.674954      11 log.go:172] (0x4000ee4e70) (0x40017a1a40) Stream added, broadcasting: 3
I0828 14:33:09.676510      11 log.go:172] (0x4000ee4e70) Reply frame received for 3
I0828 14:33:09.676643      11 log.go:172] (0x4000ee4e70) (0x40009505a0) Create stream
I0828 14:33:09.676707      11 log.go:172] (0x4000ee4e70) (0x40009505a0) Stream added, broadcasting: 5
I0828 14:33:09.677954      11 log.go:172] (0x4000ee4e70) Reply frame received for 5
I0828 14:33:09.743975      11 log.go:172] (0x4000ee4e70) Data frame received for 3
I0828 14:33:09.744182      11 log.go:172] (0x40017a1a40) (3) Data frame handling
I0828 14:33:09.744301      11 log.go:172] (0x40017a1a40) (3) Data frame sent
I0828 14:33:09.744384      11 log.go:172] (0x4000ee4e70) Data frame received for 3
I0828 14:33:09.744442      11 log.go:172] (0x40017a1a40) (3) Data frame handling
I0828 14:33:09.745101      11 log.go:172] (0x4000ee4e70) Data frame received for 5
I0828 14:33:09.745227      11 log.go:172] (0x40009505a0) (5) Data frame handling
I0828 14:33:09.745574      11 log.go:172] (0x4000ee4e70) Data frame received for 1
I0828 14:33:09.745680      11 log.go:172] (0x4000950000) (1) Data frame handling
I0828 14:33:09.745791      11 log.go:172] (0x4000950000) (1) Data frame sent
I0828 14:33:09.745893      11 log.go:172] (0x4000ee4e70) (0x4000950000) Stream removed, broadcasting: 1
I0828 14:33:09.746004      11 log.go:172] (0x4000ee4e70) Go away received
I0828 14:33:09.746145      11 log.go:172] (0x4000ee4e70) (0x4000950000) Stream removed, broadcasting: 1
I0828 14:33:09.746245      11 log.go:172] (0x4000ee4e70) (0x40017a1a40) Stream removed, broadcasting: 3
I0828 14:33:09.746307      11 log.go:172] (0x4000ee4e70) (0x40009505a0) Stream removed, broadcasting: 5
Aug 28 14:33:09.746: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:33:09.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3604" for this suite.

• [SLOW TEST:29.247 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3567,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:33:09.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:33:09.898: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 28 14:33:15.024: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 28 14:33:17.334: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 28 14:33:19.346: INFO: Creating deployment "test-rollover-deployment"
Aug 28 14:33:19.379: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 28 14:33:21.527: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 28 14:33:21.536: INFO: Ensure that both replica sets have 1 created replica
Aug 28 14:33:21.543: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 28 14:33:21.550: INFO: Updating deployment test-rollover-deployment
Aug 28 14:33:21.551: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 28 14:33:23.996: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 28 14:33:24.002: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 28 14:33:24.009: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 14:33:24.009: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222002, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:33:26.372: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 14:33:26.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222002, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:33:28.286: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 14:33:28.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222007, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:33:30.022: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 14:33:30.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222007, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:33:32.022: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 14:33:32.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222007, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:33:34.021: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 14:33:34.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222007, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:33:36.025: INFO: all replica sets need to contain the pod-template-hash label
Aug 28 14:33:36.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222007, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:33:38.073: INFO: 
Aug 28 14:33:38.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222017, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734221999, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:33:40.025: INFO: 
Aug 28 14:33:40.025: INFO: Ensure that both old replica sets have no replicas
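The lines above show the test polling every two seconds until the rollover completes. A minimal sketch of that wait-until pattern (a hypothetical `wait_for` helper for illustration, not the e2e framework's actual code):

```python
import time

def wait_for(condition, timeout=60.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Stand-in condition that flips true on the third poll,
# mimicking the repeated "is progressing" checks in the log.
state = {"calls": 0}
def rollover_done():
    state["calls"] += 1
    return state["calls"] >= 3

ok = wait_for(rollover_done, timeout=10.0, interval=0.01)
```

The short interval here only keeps the illustration fast; the test in the log uses a 2-second poll.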
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 28 14:33:40.037: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-2841 /apis/apps/v1/namespaces/deployment-2841/deployments/test-rollover-deployment cce10e3c-77b9-4790-a305-7fdfa64671b3 1776026 2 2020-08-28 14:33:19 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-28 14:33:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-28 14:33:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 
58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003955168  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-28 14:33:19 +0000 UTC,LastTransitionTime:2020-08-28 14:33:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-08-28 14:33:38 +0000 UTC,LastTransitionTime:2020-08-28 14:33:19 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 28 14:33:40.045: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-2841 /apis/apps/v1/namespaces/deployment-2841/replicasets/test-rollover-deployment-84f7f6f64b fb4637ac-643a-457d-b056-b845d6b1339e 1776015 2 2020-08-28 14:33:21 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment cce10e3c-77b9-4790-a305-7fdfa64671b3 0x4004765437 0x4004765438}] []  [{kube-controller-manager Update apps/v1 2020-08-28 14:33:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 99 101 49 48 101 51 99 45 55 55 98 57 45 52 55 57 48 45 97 51 48 53 45 55 102 100 102 97 54 52 54 55 49 98 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 
58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 
110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40047654c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 28 14:33:40.045: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 28 14:33:40.046: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-2841 /apis/apps/v1/namespaces/deployment-2841/replicasets/test-rollover-controller 4f38a198-33b0-4d6c-b008-42a072f0f2fd 1776025 2 2020-08-28 14:33:09 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment cce10e3c-77b9-4790-a305-7fdfa64671b3 0x4004765227 0x4004765228}] []  [{e2e.test Update apps/v1 2020-08-28 14:33:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 
121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-28 14:33:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 99 101 49 48 101 51 99 45 55 55 98 57 45 52 55 57 48 45 97 51 48 53 45 55 102 100 102 97 54 52 54 55 49 98 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x40047652c8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 28 14:33:40.048: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-2841 /apis/apps/v1/namespaces/deployment-2841/replicasets/test-rollover-deployment-5686c4cfd5 bca8d252-365e-4dcb-9807-b5c7a403ff29 1775959 2 2020-08-28 14:33:19 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment cce10e3c-77b9-4790-a305-7fdfa64671b3 0x4004765337 0x4004765338}] []  [{kube-controller-manager Update apps/v1 2020-08-28 14:33:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 99 101 49 48 101 51 99 45 55 55 98 57 45 52 55 57 48 45 97 51 48 53 45 55 102 100 102 97 54 52 54 55 49 98 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 
58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40047653c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
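At this point the new ReplicaSet holds the single ready replica and both old ReplicaSets (`test-rollover-controller` and `test-rollover-deployment-5686c4cfd5`) report `Replicas:*0`. A hedged sketch of the completion predicate over plain dicts (field names mirror `DeploymentStatus`; this is an illustration, not the framework's code):

```python
def rollover_complete(status, old_replica_sets):
    """True when the deployment has fully rolled over: the single updated
    replica is ready/available and every old ReplicaSet is scaled to zero."""
    return (
        status["updatedReplicas"] == status["replicas"] == 1
        and status["readyReplicas"] == 1
        and status["availableReplicas"] == 1
        and status["unavailableReplicas"] == 0
        and all(rs["replicas"] == 0 for rs in old_replica_sets)
    )

# Values taken from the final status and old ReplicaSet dumps above.
final_status = {"replicas": 1, "updatedReplicas": 1, "readyReplicas": 1,
                "availableReplicas": 1, "unavailableReplicas": 0}
old_rs = [{"name": "test-rollover-controller", "replicas": 0},
          {"name": "test-rollover-deployment-5686c4cfd5", "replicas": 0}]
done = rollover_complete(final_status, old_rs)
```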
Aug 28 14:33:40.057: INFO: Pod "test-rollover-deployment-84f7f6f64b-9lvx2" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-9lvx2 test-rollover-deployment-84f7f6f64b- deployment-2841 /api/v1/namespaces/deployment-2841/pods/test-rollover-deployment-84f7f6f64b-9lvx2 37f2e764-cf59-4c6d-ab0b-1c02a07fb10f 1775978 0 2020-08-28 14:33:22 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b fb4637ac-643a-457d-b056-b845d6b1339e 0x4004765a87 0x4004765a88}] []  [{kube-controller-manager Update v1 2020-08-28 14:33:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 98 52 54 51 55 97 99 45 54 52 51 97 45 52 53 55 100 45 98 48 53 54 45 98 56 52 53 100 54 98 49 51 51 57 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 
101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:33:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 
84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 52 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-csnff,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-csnff,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-csnff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,Run
AsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:33:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:33:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:33:27 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:33:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.43,StartTime:2020-08-28 14:33:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 14:33:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://008a0d05157992e0006e10efbe18b2da0dd7acdb9384f2c7f245f40953aa9d4c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:33:40.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2841" for this suite.

• [SLOW TEST:30.310 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":207,"skipped":3603,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:33:40.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:33:44.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7326" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3623,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:33:44.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:33:46.544: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:33:48.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222026, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222026, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222026, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222026, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:33:51.595: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:33:51.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3688" for this suite.
STEP: Destroying namespace "webhook-3688-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.523 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":209,"skipped":3641,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:33:51.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-7992
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 28 14:33:51.975: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 28 14:33:52.058: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:33:54.064: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:33:56.063: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:33:58.063: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:34:00.066: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:34:02.066: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:34:04.083: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:34:06.065: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 28 14:34:08.065: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 28 14:34:08.073: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 28 14:34:16.101: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.47:8080/dial?request=hostname&protocol=udp&host=10.244.1.60&port=8081&tries=1'] Namespace:pod-network-test-7992 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:34:16.102: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:34:16.162125      11 log.go:172] (0x400132c210) (0x40026b5860) Create stream
I0828 14:34:16.162244      11 log.go:172] (0x400132c210) (0x40026b5860) Stream added, broadcasting: 1
I0828 14:34:16.164850      11 log.go:172] (0x400132c210) Reply frame received for 1
I0828 14:34:16.164954      11 log.go:172] (0x400132c210) (0x4002056820) Create stream
I0828 14:34:16.165012      11 log.go:172] (0x400132c210) (0x4002056820) Stream added, broadcasting: 3
I0828 14:34:16.166542      11 log.go:172] (0x400132c210) Reply frame received for 3
I0828 14:34:16.166764      11 log.go:172] (0x400132c210) (0x40026b5a40) Create stream
I0828 14:34:16.166877      11 log.go:172] (0x400132c210) (0x40026b5a40) Stream added, broadcasting: 5
I0828 14:34:16.168522      11 log.go:172] (0x400132c210) Reply frame received for 5
I0828 14:34:16.240955      11 log.go:172] (0x400132c210) Data frame received for 3
I0828 14:34:16.241102      11 log.go:172] (0x4002056820) (3) Data frame handling
I0828 14:34:16.241182      11 log.go:172] (0x400132c210) Data frame received for 5
I0828 14:34:16.241256      11 log.go:172] (0x40026b5a40) (5) Data frame handling
I0828 14:34:16.241354      11 log.go:172] (0x4002056820) (3) Data frame sent
I0828 14:34:16.241424      11 log.go:172] (0x400132c210) Data frame received for 3
I0828 14:34:16.241468      11 log.go:172] (0x4002056820) (3) Data frame handling
I0828 14:34:16.242178      11 log.go:172] (0x400132c210) Data frame received for 1
I0828 14:34:16.242283      11 log.go:172] (0x40026b5860) (1) Data frame handling
I0828 14:34:16.242358      11 log.go:172] (0x40026b5860) (1) Data frame sent
I0828 14:34:16.242435      11 log.go:172] (0x400132c210) (0x40026b5860) Stream removed, broadcasting: 1
I0828 14:34:16.242529      11 log.go:172] (0x400132c210) Go away received
I0828 14:34:16.242752      11 log.go:172] (0x400132c210) (0x40026b5860) Stream removed, broadcasting: 1
I0828 14:34:16.242823      11 log.go:172] (0x400132c210) (0x4002056820) Stream removed, broadcasting: 3
I0828 14:34:16.242870      11 log.go:172] (0x400132c210) (0x40026b5a40) Stream removed, broadcasting: 5
Aug 28 14:34:16.243: INFO: Waiting for responses: map[]
Aug 28 14:34:16.246: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.47:8080/dial?request=hostname&protocol=udp&host=10.244.2.45&port=8081&tries=1'] Namespace:pod-network-test-7992 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:34:16.246: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:34:16.304425      11 log.go:172] (0x4000e271e0) (0x4002057a40) Create stream
I0828 14:34:16.304538      11 log.go:172] (0x4000e271e0) (0x4002057a40) Stream added, broadcasting: 1
I0828 14:34:16.310201      11 log.go:172] (0x4000e271e0) Reply frame received for 1
I0828 14:34:16.310382      11 log.go:172] (0x4000e271e0) (0x4000996320) Create stream
I0828 14:34:16.310451      11 log.go:172] (0x4000e271e0) (0x4000996320) Stream added, broadcasting: 3
I0828 14:34:16.312129      11 log.go:172] (0x4000e271e0) Reply frame received for 3
I0828 14:34:16.312294      11 log.go:172] (0x4000e271e0) (0x400129af00) Create stream
I0828 14:34:16.312370      11 log.go:172] (0x4000e271e0) (0x400129af00) Stream added, broadcasting: 5
I0828 14:34:16.313643      11 log.go:172] (0x4000e271e0) Reply frame received for 5
I0828 14:34:16.379576      11 log.go:172] (0x4000e271e0) Data frame received for 3
I0828 14:34:16.379718      11 log.go:172] (0x4000996320) (3) Data frame handling
I0828 14:34:16.379817      11 log.go:172] (0x4000e271e0) Data frame received for 5
I0828 14:34:16.379896      11 log.go:172] (0x400129af00) (5) Data frame handling
I0828 14:34:16.379997      11 log.go:172] (0x4000996320) (3) Data frame sent
I0828 14:34:16.380142      11 log.go:172] (0x4000e271e0) Data frame received for 3
I0828 14:34:16.380207      11 log.go:172] (0x4000996320) (3) Data frame handling
I0828 14:34:16.380869      11 log.go:172] (0x4000e271e0) Data frame received for 1
I0828 14:34:16.380937      11 log.go:172] (0x4002057a40) (1) Data frame handling
I0828 14:34:16.380996      11 log.go:172] (0x4002057a40) (1) Data frame sent
I0828 14:34:16.381067      11 log.go:172] (0x4000e271e0) (0x4002057a40) Stream removed, broadcasting: 1
I0828 14:34:16.381155      11 log.go:172] (0x4000e271e0) Go away received
I0828 14:34:16.381396      11 log.go:172] (0x4000e271e0) (0x4002057a40) Stream removed, broadcasting: 1
I0828 14:34:16.381487      11 log.go:172] (0x4000e271e0) (0x4000996320) Stream removed, broadcasting: 3
I0828 14:34:16.381564      11 log.go:172] (0x4000e271e0) (0x400129af00) Stream removed, broadcasting: 5
Aug 28 14:34:16.381: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:34:16.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7992" for this suite.

• [SLOW TEST:24.498 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3652,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:34:16.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:34:19.028: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:34:21.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222058, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:34:24.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222058, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:34:25.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222058, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:34:27.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222058, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:34:29.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222058, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:34:31.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222059, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222058, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:34:35.140: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:34:35.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3993-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:34:39.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3502" for this suite.
STEP: Destroying namespace "webhook-3502-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.606 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":211,"skipped":3653,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:34:40.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-361197b4-1eee-43ef-ab6b-4f4175ed45e1
STEP: Creating a pod to test consume secrets
Aug 28 14:34:43.740: INFO: Waiting up to 5m0s for pod "pod-secrets-af5676e3-6570-4720-8471-677877c3d03b" in namespace "secrets-2199" to be "Succeeded or Failed"
Aug 28 14:34:44.097: INFO: Pod "pod-secrets-af5676e3-6570-4720-8471-677877c3d03b": Phase="Pending", Reason="", readiness=false. Elapsed: 356.210522ms
Aug 28 14:34:46.143: INFO: Pod "pod-secrets-af5676e3-6570-4720-8471-677877c3d03b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402556896s
Aug 28 14:34:48.354: INFO: Pod "pod-secrets-af5676e3-6570-4720-8471-677877c3d03b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613093268s
Aug 28 14:34:50.421: INFO: Pod "pod-secrets-af5676e3-6570-4720-8471-677877c3d03b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.680883756s
STEP: Saw pod success
Aug 28 14:34:50.422: INFO: Pod "pod-secrets-af5676e3-6570-4720-8471-677877c3d03b" satisfied condition "Succeeded or Failed"
Aug 28 14:34:50.425: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-af5676e3-6570-4720-8471-677877c3d03b container secret-env-test: 
STEP: delete the pod
Aug 28 14:34:50.737: INFO: Waiting for pod pod-secrets-af5676e3-6570-4720-8471-677877c3d03b to disappear
Aug 28 14:34:50.822: INFO: Pod pod-secrets-af5676e3-6570-4720-8471-677877c3d03b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:34:50.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2199" for this suite.

• [SLOW TEST:9.929 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3658,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:34:50.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:34:51.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 28 14:35:10.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8675 create -f -'
Aug 28 14:35:25.385: INFO: stderr: ""
Aug 28 14:35:25.385: INFO: stdout: "e2e-test-crd-publish-openapi-7346-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 28 14:35:25.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8675 delete e2e-test-crd-publish-openapi-7346-crds test-foo'
Aug 28 14:35:26.647: INFO: stderr: ""
Aug 28 14:35:26.647: INFO: stdout: "e2e-test-crd-publish-openapi-7346-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 28 14:35:26.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8675 apply -f -'
Aug 28 14:35:28.542: INFO: stderr: ""
Aug 28 14:35:28.542: INFO: stdout: "e2e-test-crd-publish-openapi-7346-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 28 14:35:28.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8675 delete e2e-test-crd-publish-openapi-7346-crds test-foo'
Aug 28 14:35:29.807: INFO: stderr: ""
Aug 28 14:35:29.807: INFO: stdout: "e2e-test-crd-publish-openapi-7346-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 28 14:35:29.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8675 create -f -'
Aug 28 14:35:31.427: INFO: rc: 1
Aug 28 14:35:31.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8675 apply -f -'
Aug 28 14:35:33.167: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 28 14:35:33.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8675 create -f -'
Aug 28 14:35:34.947: INFO: rc: 1
Aug 28 14:35:34.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8675 apply -f -'
Aug 28 14:35:36.669: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 28 14:35:36.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7346-crds'
Aug 28 14:35:38.423: INFO: stderr: ""
Aug 28 14:35:38.423: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7346-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 28 14:35:38.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7346-crds.metadata'
Aug 28 14:35:40.538: INFO: stderr: ""
Aug 28 14:35:40.539: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7346-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 28 14:35:40.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7346-crds.spec'
Aug 28 14:35:42.400: INFO: stderr: ""
Aug 28 14:35:42.401: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7346-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 28 14:35:42.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7346-crds.spec.bars'
Aug 28 14:35:43.877: INFO: stderr: ""
Aug 28 14:35:43.877: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7346-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 28 14:35:43.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7346-crds.spec.bars2'
Aug 28 14:35:45.428: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:36:05.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8675" for this suite.

• [SLOW TEST:74.183 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":213,"skipped":3691,"failed":0}
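The `kubectl explain` output above implies a structural validation schema on the CRD: `spec.bars` is a list of objects with a required `name`, plus `age` and `bazs`. A hedged reconstruction of what such a schema might look like — the group and kind follow the generated names in the log, but this is a sketch, not the suite's exact fixture, and the `age` type is assumed:

```yaml
# Hypothetical sketch of a CRD whose schema would produce the
# `kubectl explain` output above; not the suite's exact fixture.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-7346-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-7346-crds
    singular: e2e-test-crd-publish-openapi-7346-crd
    kind: E2e-test-crd-publish-openapi-7346-crd
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          description: Foo CRD for Testing
          type: object
          properties:
            spec:
              description: Specification of Foo
              type: object
              properties:
                bars:
                  description: List of Bars and their specs.
                  type: array
                  items:
                    type: object
                    required: ["name"]
                    properties:
                      name:
                        description: Name of Bar.
                        type: string
                      age:
                        description: Age of Bar.
                        type: string   # type assumed
                      bazs:
                        description: List of Bazs.
                        type: array
                        items:
                          type: string
            status:
              description: Status of Foo
              type: object
```

With such a schema published, client-side validation rejects unknown and missing required properties (the `rc: 1` results above), and `kubectl explain` can describe each property recursively.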
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:36:05.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 28 14:36:05.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2441'
Aug 28 14:36:08.203: INFO: stderr: ""
Aug 28 14:36:08.203: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 28 14:36:08.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2441'
Aug 28 14:36:09.489: INFO: stderr: ""
Aug 28 14:36:09.489: INFO: stdout: "update-demo-nautilus-b6kvk update-demo-nautilus-w7nqw "
Aug 28 14:36:09.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6kvk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2441'
Aug 28 14:36:11.026: INFO: stderr: ""
Aug 28 14:36:11.027: INFO: stdout: ""
Aug 28 14:36:11.027: INFO: update-demo-nautilus-b6kvk is created but not running
Aug 28 14:36:16.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2441'
Aug 28 14:36:17.315: INFO: stderr: ""
Aug 28 14:36:17.316: INFO: stdout: "update-demo-nautilus-b6kvk update-demo-nautilus-w7nqw "
Aug 28 14:36:17.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6kvk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2441'
Aug 28 14:36:18.619: INFO: stderr: ""
Aug 28 14:36:18.620: INFO: stdout: "true"
Aug 28 14:36:18.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b6kvk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2441'
Aug 28 14:36:19.898: INFO: stderr: ""
Aug 28 14:36:19.899: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 14:36:19.899: INFO: validating pod update-demo-nautilus-b6kvk
Aug 28 14:36:19.906: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 28 14:36:19.907: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 28 14:36:19.907: INFO: update-demo-nautilus-b6kvk is verified up and running
Aug 28 14:36:19.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w7nqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2441'
Aug 28 14:36:21.172: INFO: stderr: ""
Aug 28 14:36:21.172: INFO: stdout: "true"
Aug 28 14:36:21.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w7nqw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2441'
Aug 28 14:36:22.573: INFO: stderr: ""
Aug 28 14:36:22.573: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 28 14:36:22.573: INFO: validating pod update-demo-nautilus-w7nqw
Aug 28 14:36:22.578: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 28 14:36:22.578: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 28 14:36:22.578: INFO: update-demo-nautilus-w7nqw is verified up and running
STEP: using delete to clean up resources
Aug 28 14:36:22.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2441'
Aug 28 14:36:24.748: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 28 14:36:24.748: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 28 14:36:24.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2441'
Aug 28 14:36:26.349: INFO: stderr: "No resources found in kubectl-2441 namespace.\n"
Aug 28 14:36:26.349: INFO: stdout: ""
Aug 28 14:36:26.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2441 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 28 14:36:27.774: INFO: stderr: ""
Aug 28 14:36:27.774: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:36:27.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2441" for this suite.

• [SLOW TEST:23.573 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":214,"skipped":3704,"failed":0}
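The Update Demo test above repeatedly extracts pod names with `kubectl get pods -o template --template={{range.items}}{{.metadata.name}} {{end}}`. The basic template expression is standard Go `text/template` evaluated against the decoded API response; a minimal sketch of that evaluation, using a hypothetical stand-in for the pod list rather than real cluster output (note that kubectl-specific helpers such as `exists`, used elsewhere in the log, are not part of plain `text/template`):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderNames evaluates the same template expression the test passes to
// `kubectl get pods -o template`, extracting pod names from a list object.
func renderNames(list map[string]interface{}) string {
	tmpl := template.Must(template.New("names").Parse("{{range .items}}{{.metadata.name}} {{end}}"))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, list); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Hypothetical stand-in for the decoded API pod list; not real cluster output.
	podList := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"metadata": map[string]interface{}{"name": "update-demo-nautilus-b6kvk"}},
			map[string]interface{}{"metadata": map[string]interface{}{"name": "update-demo-nautilus-w7nqw"}},
		},
	}
	fmt.Println(renderNames(podList)) // prints "update-demo-nautilus-b6kvk update-demo-nautilus-w7nqw "
}
```

The trailing space in the rendered string matches the stdout seen in the log, since the template emits a space after every name.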
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:36:28.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:36:33.024: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:36:35.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222192, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:36:37.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222192, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:36:39.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222192, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:36:41.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222192, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:36:43.068: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222193, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222192, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:36:46.497: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:36:46.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7698-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:36:47.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6581" for this suite.
STEP: Destroying namespace "webhook-6581-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.089 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":215,"skipped":3706,"failed":0}
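The registration step above ("Registering the mutating webhook ... via the AdmissionRegistration API") corresponds to creating a MutatingWebhookConfiguration that routes admission requests for the test CR to the `e2e-test-webhook` service. A hedged sketch of the shape of such a registration — the configuration name, webhook path, and CA bundle are illustrative assumptions; only the service name, namespace, and CR group appear in the log:

```yaml
# Hypothetical sketch of the registration the step above performs;
# names and paths are illustrative, not the suite's exact fixture.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook-crd     # illustrative name
webhooks:
  - name: mutate-custom-resource.webhook.example.com
    clientConfig:
      service:
        name: e2e-test-webhook            # service name seen in the log
        namespace: webhook-6581           # namespace seen in the log
        path: /mutating-custom-resource   # path is assumed
      caBundle: <base64-encoded CA cert>  # placeholder
    rules:
      - apiGroups: ["webhook.example.com"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-7698-crds"]
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
```

Once registered, the subsequent "Creating a custom resource that should be mutated by the webhook" step confirms the API server calls the webhook and applies its patch before persisting the object.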
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:36:47.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:37:19.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9809" for this suite.
STEP: Destroying namespace "nsdeletetest-1380" for this suite.
Aug 28 14:37:19.746: INFO: Namespace nsdeletetest-1380 was already deleted
STEP: Destroying namespace "nsdeletetest-3988" for this suite.

• [SLOW TEST:31.970 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":216,"skipped":3706,"failed":0}
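The "Waiting for the namespace to be removed" step above is a poll-until-not-found loop. A self-contained sketch of that shape (hypothetical helper; the real framework polls the API server via its wait utilities):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("namespace not found")

// waitForNamespaceGone polls get() until it reports not-found (deletion
// finished) or the attempt budget runs out.
func waitForNamespaceGone(get func() error, interval time.Duration, attempts int) error {
	for i := 0; i < attempts; i++ {
		if err := get(); errors.Is(err, errNotFound) {
			return nil // namespace is gone
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("namespace still present after %d attempts", attempts)
}

func main() {
	calls := 0
	// Fake getter: the namespace "disappears" on the third poll.
	get := func() error {
		calls++
		if calls >= 3 {
			return errNotFound
		}
		return nil
	}
	if err := waitForNamespaceGone(get, time.Millisecond, 10); err != nil {
		panic(err)
	}
	fmt.Println("namespace removed after", calls, "polls")
}
```

Because namespace deletion cascades to the pods inside it, the test can then recreate the namespace and assert it contains no pods.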
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:37:19.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 28 14:37:31.178: INFO: Successfully updated pod "pod-update-activedeadlineseconds-266b75b7-f4a4-427a-a394-541550183c81"
Aug 28 14:37:31.179: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-266b75b7-f4a4-427a-a394-541550183c81" in namespace "pods-814" to be "terminated due to deadline exceeded"
Aug 28 14:37:31.217: INFO: Pod "pod-update-activedeadlineseconds-266b75b7-f4a4-427a-a394-541550183c81": Phase="Running", Reason="", readiness=true. Elapsed: 37.44797ms
Aug 28 14:37:33.229: INFO: Pod "pod-update-activedeadlineseconds-266b75b7-f4a4-427a-a394-541550183c81": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.049768743s
Aug 28 14:37:33.229: INFO: Pod "pod-update-activedeadlineseconds-266b75b7-f4a4-427a-a394-541550183c81" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:37:33.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-814" for this suite.

• [SLOW TEST:13.515 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3707,"failed":0}
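The log above shows the pod flipping from Phase="Running" to Phase="Failed", Reason="DeadlineExceeded" once the shortened activeDeadlineSeconds elapses. The condition the test waits on ("terminated due to deadline exceeded") reduces to a simple check, restated here as a sketch (not the framework's code):

```go
package main

import "fmt"

// terminatedDueToDeadline reports whether a pod status matches what the
// kubelet sets after activeDeadlineSeconds expires: the pod is failed
// with reason DeadlineExceeded.
func terminatedDueToDeadline(phase, reason string) bool {
	return phase == "Failed" && reason == "DeadlineExceeded"
}

func main() {
	// The two states observed in the log while polling:
	fmt.Println(terminatedDueToDeadline("Running", ""))                // not yet terminated
	fmt.Println(terminatedDueToDeadline("Failed", "DeadlineExceeded")) // condition satisfied
}
```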
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:37:33.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:37:35.066: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:37:37.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:37:39.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:37:41.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222255, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:37:44.129: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:37:56.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2924" for this suite.
STEP: Destroying namespace "webhook-2924-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:23.663 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":218,"skipped":3736,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:37:56.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:37:57.891: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 28 14:38:02.969: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 28 14:38:05.235: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 28 14:38:11.435: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-2591 /apis/apps/v1/namespaces/deployment-2591/deployments/test-cleanup-deployment c32215bc-3144-430b-9a4e-95235ac281d7 1777400 1 2020-08-28 14:38:05 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  [{e2e.test Update apps/v1 2020-08-28 14:38:05 +0000 UTC FieldsV1 <managedFields byte dump elided>} {kube-controller-manager Update apps/v1 2020-08-28 14:38:09 +0000 UTC FieldsV1 <managedFields byte dump elided>}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{pod template: labels map[name:cleanup-pod], container agnhost (us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12), ImagePullPolicy IfNotPresent, TerminationMessagePath /dev/termination-log, RestartPolicy Always, DNSPolicy ClusterFirst, SchedulerName default-scheduler},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-28 14:38:05 +0000 UTC,LastTransitionTime:2020-08-28 14:38:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-b4867b47f" has successfully progressed.,LastUpdateTime:2020-08-28 14:38:09 +0000 UTC,LastTransitionTime:2020-08-28 14:38:05 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 28 14:38:11.445: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f  deployment-2591 /apis/apps/v1/namespaces/deployment-2591/replicasets/test-cleanup-deployment-b4867b47f 371233fb-be02-4ad4-b1ca-7472e2af3536 1777388 1 2020-08-28 14:38:05 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c32215bc-3144-430b-9a4e-95235ac281d7}] []  [{kube-controller-manager Update apps/v1 2020-08-28 14:38:09 +0000 UTC FieldsV1 <managedFields byte dump elided>}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{pod template: labels map[name:cleanup-pod pod-template-hash:b4867b47f], container agnhost (us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12), ImagePullPolicy IfNotPresent, RestartPolicy Always, DNSPolicy ClusterFirst, SchedulerName default-scheduler},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 28 14:38:11.454: INFO: Pod "test-cleanup-deployment-b4867b47f-vj84z" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-vj84z test-cleanup-deployment-b4867b47f- deployment-2591 /api/v1/namespaces/deployment-2591/pods/test-cleanup-deployment-b4867b47f-vj84z 9fc2997c-7054-4111-bbec-7f708f13e5af 1777386 0 2020-08-28 14:38:05 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 371233fb-be02-4ad4-b1ca-7472e2af3536}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:05 +0000 UTC FieldsV1 <managedFields byte dump elided>} {kubelet Update v1 2020-08-28 14:38:09 +0000 UTC FieldsV1 <managedFields byte dump elided>}]},Spec:PodSpec{Volumes:[default-token-bgz6g (Secret default-token-bgz6g, DefaultMode *420)],Containers:[agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, VolumeMount default-token-bgz6g at /var/run/secrets/kubernetes.io/serviceaccount (ro), TerminationMessagePath /dev/termination-log, ImagePullPolicy IfNotPresent],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,DNSPolicy:ClusterFirst,ServiceAccountName:default,NodeName:kali-worker,SchedulerName:default-scheduler,Tolerations:[node.kubernetes.io/not-ready:NoExecute for 300s, node.kubernetes.io/unreachable:NoExecute for 300s],Priority:*0,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[Initialized=True (2020-08-28 14:38:05), Ready=True (2020-08-28 14:38:09), ContainersReady=True (2020-08-28 14:38:09), PodScheduled=True (2020-08-28 14:38:05)],HostIP:172.18.0.15,PodIP:10.244.1.65,StartTime:2020-08-28 14:38:05 +0000 UTC,ContainerStatuses:[agnhost Running since 2020-08-28 14:38:08 +0000 UTC, Ready:true, RestartCount:0, ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c, ContainerID:containerd://65199b74b4d1d7f6a73dc64b574de89fc20184446994853a8a0a000a49c99145],QOSClass:BestEffort,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:38:11.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2591" for this suite.

• [SLOW TEST:14.532 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":219,"skipped":3746,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:38:11.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:38:11.965: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:38:12.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7883" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":220,"skipped":3780,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:38:12.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:38:12.727: INFO: Creating deployment "webserver-deployment"
Aug 28 14:38:12.752: INFO: Waiting for observed generation 1
Aug 28 14:38:14.773: INFO: Waiting for all required pods to come up
Aug 28 14:38:15.340: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 28 14:38:30.348: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 28 14:38:30.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:9, AvailableReplicas:9, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222309, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222309, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222309, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222292, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"webserver-deployment-84855cf797\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:38:32.375: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 28 14:38:32.388: INFO: Updating deployment webserver-deployment
Aug 28 14:38:32.389: INFO: Waiting for observed generation 2
Aug 28 14:38:34.624: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 28 14:38:34.631: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 28 14:38:34.637: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 28 14:38:34.651: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 28 14:38:34.651: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 28 14:38:34.656: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 28 14:38:34.663: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 28 14:38:34.663: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 28 14:38:34.674: INFO: Updating deployment webserver-deployment
Aug 28 14:38:34.674: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 28 14:38:35.516: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 28 14:38:38.702: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
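The replica counts verified above (old ReplicaSet 8 → 20, new ReplicaSet 5 → 13 when scaling the Deployment from 10 to 30 with maxSurge=3) follow from proportional scaling: the controller distributes the surge-adjusted total across existing ReplicaSets in proportion to their current sizes. The sketch below is a simplified illustration of that arithmetic, not the actual Deployment controller code; the function name and the leftover-goes-to-newest policy are assumptions for this example:

```python
def proportional_scale(desired, max_surge, replica_sets):
    """Distribute (desired + max_surge) pods across replica sets in
    proportion to their current sizes, mimicking how the Deployment
    controller scales during a rolling update. Simplified sketch:
    any leftover pods go to the first (newest) replica set."""
    total = sum(replica_sets.values())          # 8 + 5 = 13 in the log
    cap = desired + max_surge                   # 30 + 3 = 33 in the log
    # Floor-divide each RS's proportional share of the cap.
    sizes = {name: cap * n // total for name, n in replica_sets.items()}
    leftover = cap - sum(sizes.values())
    # Hand any remainder to the newest RS (first key, by convention here).
    newest = next(iter(replica_sets))
    sizes[newest] += leftover
    return sizes

# Values from the log: new RS at 5, old RS at 8, scaling 10 -> 30, maxSurge=3.
print(proportional_scale(30, 3, {"new-6676bcd6d4": 5, "old-84855cf797": 8}))
# -> {'new-6676bcd6d4': 13, 'old-84855cf797': 20}
```

This reproduces the `.spec.replicas = 20` and `.spec.replicas = 13` values the test verifies, and their sum (33) matches `Replicas:33` in the Deployment status dump below, i.e. desired (30) plus maxSurge (3).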
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 28 14:38:39.336: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5935 /apis/apps/v1/namespaces/deployment-5935/deployments/webserver-deployment fbe23c21-0038-4d94-a957-bb663f58512a 1777747 3 2020-08-28 14:38:12 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-28 14:38:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 
125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-28 14:38:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 
123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002e88eb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-28 14:38:34 +0000 UTC,LastTransitionTime:2020-08-28 14:38:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-08-28 14:38:37 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 28 14:38:39.522: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-5935 /apis/apps/v1/namespaces/deployment-5935/replicasets/webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 1777743 3 2020-08-28 14:38:32 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment fbe23c21-0038-4d94-a957-bb663f58512a 0x4002f82a17 0x4002f82a18}] []  [{kube-controller-manager Update apps/v1 2020-08-28 14:38:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 98 101 50 51 99 50 49 45 48 48 51 56 45 52 100 57 52 45 97 57 53 55 45 98 98 54 54 51 102 53 56 53 49 50 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 
102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002f82a98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 28 14:38:39.522: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 28 14:38:39.523: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-5935 /apis/apps/v1/namespaces/deployment-5935/replicasets/webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 1777733 3 2020-08-28 14:38:12 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment fbe23c21-0038-4d94-a957-bb663f58512a 0x4002f82af7 0x4002f82af8}] []  [{kube-controller-manager Update apps/v1 2020-08-28 14:38:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 98 101 50 51 99 50 49 45 48 48 51 56 45 52 100 57 52 45 97 57 53 55 45 98 98 54 54 51 102 53 56 53 49 50 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 
100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002f82b68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 28 14:38:39.782: INFO: Pod "webserver-deployment-6676bcd6d4-6qzjp" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6qzjp webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-6qzjp c941dcf8-88fc-4350-850f-13f3af3ead69 1777717 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4003901d17 0x4003901d18}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.784: INFO: Pod "webserver-deployment-6676bcd6d4-9gzc9" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9gzc9 webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-9gzc9 87b86398-941a-4ccd-91bc-e4d392749d04 1777652 0 2020-08-28 14:38:32 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4003901e57 0x4003901e58}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-28 14:38:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
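The long runs of decimal numbers in these pod dumps are the `managedFields` entries: the framework prints each `FieldsV1.Raw` value as a Go byte slice, so every number is one UTF-8 byte of the underlying server-side-apply JSON. A minimal sketch for recovering that JSON (the sample string is a shortened, hypothetical fragment, not a full dump):

```python
import json

def decode_fieldsv1(raw: str) -> dict:
    """Turn a space-separated run of decimal byte values, as printed
    for FieldsV1.Raw in these dumps, back into its JSON object."""
    data = bytes(int(tok) for tok in raw.split())
    return json.loads(data.decode("utf-8"))

# Shortened illustrative fragment; the real dumps are much longer.
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
print(decode_fieldsv1(sample))  # {'f:metadata': {}}
```

Applied to the full byte runs above, this yields field-ownership maps such as `{"f:metadata":{"f:generateName":{},...},"f:spec":{...}}` for the kube-controller-manager update and `{"f:status":{...}}` for the kubelet update.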
Aug 28 14:38:39.785: INFO: Pod "webserver-deployment-6676bcd6d4-b2mhp" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b2mhp webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-b2mhp d6137dec-d580-4eba-b4af-f9a699415b19 1777735 0 2020-08-28 14:38:36 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9a007 0x4002c9a008}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.787: INFO: Pod "webserver-deployment-6676bcd6d4-bbmbd" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bbmbd webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-bbmbd b04965c0-f254-4832-bdcd-c07cedf415ae 1777765 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9a147 0x4002c9a148}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-28 14:38:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
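The repeated "is not available" verdicts follow Deployment availability semantics: a pod only counts as available once its `Ready` condition has been `True` for at least the deployment's `minReadySeconds`. These pods are `Pending` with `Ready=False` (`ContainersNotReady`, the `httpd` container stuck in `ContainerCreating`), and the deliberately bogus `webserver:404` image tag means they can never become ready. A simplified sketch of that rule, assuming the condensed condition shape below (the upstream check lives in the `IsPodAvailable` pod utility):

```python
from datetime import datetime, timedelta, timezone

def is_pod_available(conditions, min_ready_seconds, now):
    """Simplified mirror of the Deployment availability rule:
    Ready must be True, and must have been True for at least
    min_ready_seconds before `now`."""
    for cond in conditions:
        if cond["type"] == "Ready" and cond["status"] == "True":
            ready_since = cond["last_transition_time"]
            return now - ready_since >= timedelta(seconds=min_ready_seconds)
    return False  # no Ready=True condition at all

now = datetime(2020, 8, 28, 14, 38, 39, tzinfo=timezone.utc)
conditions = [  # condensed from the pod dump above
    {"type": "Ready", "status": "False",
     "last_transition_time": datetime(2020, 8, 28, 14, 38, 37, tzinfo=timezone.utc)},
]
print(is_pod_available(conditions, 0, now))  # False: Ready is still False
```

With `min_ready_seconds=0`, a pod becomes available the moment `Ready` flips to `True`, which is why the rollout above stays blocked on these permanently unready replicas.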
Aug 28 14:38:39.788: INFO: Pod "webserver-deployment-6676bcd6d4-f29hz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-f29hz webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-f29hz 59ed395a-f8a5-4b90-a28f-57aba7f788c4 1777720 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9a2f7 0x4002c9a2f8}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.790: INFO: Pod "webserver-deployment-6676bcd6d4-gqghw" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-gqghw webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-gqghw 82a06e6e-47f0-427c-81fd-3911fd9d265e 1777779 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9a437 0x4002c9a438}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-28 14:38:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
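The `FieldsV1{Raw:*[...]}` arrays in the pod dumps above are managed-fields JSON printed as decimal byte values (123 is `{`, 34 is `"`, 102 is `f`, and so on). A minimal sketch of a decoder for reading them, assuming the input is the space-separated run of decimal bytes copied from between `Raw:*[` and `]` in one log line (`decode_fieldsv1` is a hypothetical helper name, not part of the e2e framework):

```python
import json


def decode_fieldsv1(raw: str) -> str:
    """Turn a space-separated run of decimal byte values, as printed in the
    e2e log's FieldsV1{Raw:*[...]} dumps, into pretty-printed JSON."""
    data = bytes(int(tok) for tok in raw.split())
    return json.dumps(json.loads(data.decode("utf-8")), indent=2)


# Example: a short prefix of the kind seen above decodes to {"f:metadata": {}}.
print(decode_fieldsv1("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"))
```

Fed a full array from one of the `kube-controller-manager Update` entries, this yields the server-side-apply field ownership map (`f:metadata`, `f:spec`, `f:containers`, ...), which is much easier to scan than the raw bytes.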
Aug 28 14:38:39.791: INFO: Pod "webserver-deployment-6676bcd6d4-kzcfr" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kzcfr webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-kzcfr 14e9ac22-79ba-4140-b788-205ec22752e2 1777722 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9a5e7 0x4002c9a5e8}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.793: INFO: Pod "webserver-deployment-6676bcd6d4-l72cc" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-l72cc webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-l72cc 49d38eac-ded5-427b-a03e-cca19c47e958 1777675 0 2020-08-28 14:38:33 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9a727 0x4002c9a728}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:34 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-28 14:38:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.794: INFO: Pod "webserver-deployment-6676bcd6d4-mvrzg" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mvrzg webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-mvrzg 3e13490f-4da0-4ceb-ba9b-b308b6863253 1777661 0 2020-08-28 14:38:32 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9a8d7 0x4002c9a8d8}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-28 14:38:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.795: INFO: Pod "webserver-deployment-6676bcd6d4-nlb4g" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nlb4g webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-nlb4g add2349c-3213-41af-99b3-8dc6cf8d0ca7 1777755 0 2020-08-28 14:38:34 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9aa87 0x4002c9aa88}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-28 14:38:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.797: INFO: Pod "webserver-deployment-6676bcd6d4-qkccd" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qkccd webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-qkccd 81c4304d-d21e-49f1-89e3-89b024896682 1777644 0 2020-08-28 14:38:32 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9ac37 0x4002c9ac38}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-28 14:38:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.798: INFO: Pod "webserver-deployment-6676bcd6d4-vqtsd" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vqtsd webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-vqtsd d179db7e-5a75-4c3f-944a-e20c11d207ad 1777725 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9ade7 0x4002c9ade8}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinux
Options:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.799: INFO: Pod "webserver-deployment-6676bcd6d4-zbk7c" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zbk7c webserver-deployment-6676bcd6d4- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-6676bcd6d4-zbk7c 15623ad5-354e-4cf8-a8dc-97a0060b7584 1777674 0 2020-08-28 14:38:33 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 52af6cf1-e5f3-4cc1-b5d3-fc46bfbd2b06 0x4002c9af27 0x4002c9af28}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 50 97 102 54 99 102 49 45 101 53 102 51 45 52 99 99 49 45 98 53 100 51 45 102 99 52 54 98 102 98 100 50 98 48 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 
67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]En
vFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-28 14:38:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
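The long runs of decimal numbers inside `FieldsV1{Raw:*[...]}` in the pod dumps above are Go's default rendering of the `Raw` field, a `[]byte` holding the pod's managed-fields JSON. A minimal sketch of recovering the readable JSON from such a run — `decode_fieldsv1_raw` is a hypothetical helper for inspecting these logs, not part of the e2e framework:

```python
def decode_fieldsv1_raw(raw: str) -> str:
    """Decode a space-separated run of decimal byte values (as printed by
    Go's %v for a []byte) back into its UTF-8 text, here managed-fields JSON."""
    return bytes(int(b) for b in raw.split()).decode("utf-8")


# Example: the fragment "123 34 102 58 115 112 101 99 34 58 123 125 125"
# decodes to the managed-fields entry '{"f:spec":{}}'.
print(decode_fieldsv1_raw("123 34 102 58 115 112 101 99 34 58 123 125 125"))
```

Applied to the full runs above, this yields entries such as `{"f:metadata":{"f:generateName":{},...}}` for the kube-controller-manager update and `{"f:status":{"f:conditions":{...}}}` for the kubelet update.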
Aug 28 14:38:39.800: INFO: Pod "webserver-deployment-84855cf797-4br55" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-4br55 webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-4br55 d16613af-115a-49bc-855b-9cd0cd314f4d 1777597 0 2020-08-28 14:38:12 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x4002c9b0e7 0x4002c9b0e8}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:29 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 
44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 53 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.57,StartTime:2020-08-28 14:38:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 14:38:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://27cd096e371498cf5cd3a5e0513059ccef5d9fc1de66c5d28a69fe6c7d4f15bc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
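The `FieldsV1{Raw:*[123 34 102 …]}` runs in the pod dumps above are managedFields JSON printed as lists of ASCII byte values. A minimal sketch of a decoder (a hypothetical helper, not part of the e2e framework) that turns such a byte list back into readable JSON:

```python
import json

def decode_fields_v1(raw_bytes):
    """Convert a list of ASCII byte values (as printed by the Go
    stringer for FieldsV1{Raw:*[...]}) back into the JSON it encodes."""
    text = bytes(raw_bytes).decode("utf-8")
    return json.loads(text)

# Leading bytes from the dump above decode to
# {"f:metadata":{"f:generateName":{}}} once the braces are closed:
sample = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58,
          123, 34, 102, 58, 103, 101, 110, 101, 114, 97, 116, 101, 78,
          97, 109, 101, 34, 58, 123, 125, 125, 125]
print(decode_fields_v1(sample))
```

Applied to a full `Raw` array, this recovers the server-side-apply field ownership map (e.g. which fields `kube-controller-manager` vs `kubelet` manage) that the log otherwise renders unreadably.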
Aug 28 14:38:39.801: INFO: Pod "webserver-deployment-84855cf797-6b9hl" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-6b9hl webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-6b9hl 4c9a98c0-f327-4a2c-83ab-44ffe70d9060 1777778 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x4002c9b297 0x4002c9b298}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 
58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-28 14:38:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.802: INFO: Pod "webserver-deployment-84855cf797-7pk99" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-7pk99 webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-7pk99 72bb82bd-80bb-46fd-b2df-a6e57383a70b 1777726 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x4002c9b437 0x4002c9b438}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.804: INFO: Pod "webserver-deployment-84855cf797-989pd" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-989pd webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-989pd 5f7df614-e03f-4fa1-95bd-c84c032863f3 1777588 0 2020-08-28 14:38:12 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x4002c9b577 0x4002c9b578}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:28 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 
44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 53 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.56,StartTime:2020-08-28 14:38:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 14:38:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://29186d55de60bbac0c3378b900d6385b09a2390e57e4770e3743331f96b8b647,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.805: INFO: Pod "webserver-deployment-84855cf797-9wfqw" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-9wfqw webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-9wfqw 2a7697f6-95be-4440-825c-dd794506441e 1777723 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x4002c9b737 0x4002c9b738}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.806: INFO: Pod "webserver-deployment-84855cf797-bgg6s" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-bgg6s webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-bgg6s 40e2b9b3-66dd-4822-97ab-5cb1d43fd6c1 1777739 0 2020-08-28 14:38:34 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x4002c9b867 0x4002c9b868}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:34 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4b4751f9-4828-4c36-b9ee-bcb81a09a2f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-28 14:38:37 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-28 14:38:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
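(Editor's aside, not part of the test output: on this Kubernetes version, v1.18, the Pod `String()` form prints each managedFields entry's `FieldsV1.Raw` as a slice of decimal byte values, which is what the long numeric runs in these dumps are. Such a run can be decoded back to its JSON text with a short helper; the function name below is illustrative, not part of any Kubernetes tooling.)

```python
def decode_fieldsv1_raw(dump: str) -> str:
    """Decode a FieldsV1 Raw dump printed as space-separated decimal
    byte values (e.g. "123 34 102 58 ...") into its JSON text."""
    return bytes(int(tok) for tok in dump.split()).decode("utf-8")

# The opening bytes of the dumps in this log spell out the start of a
# server-side-apply field-ownership map:
print(decode_fieldsv1_raw("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"))
# → {"f:metadata":{}}
```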
Aug 28 14:38:39.807: INFO: Pod "webserver-deployment-84855cf797-c4vv2" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-c4vv2 webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-c4vv2 ac56b008-1985-41e3-b18f-a54f2a6db072 1777561 0 2020-08-28 14:38:12 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x4002c9b9f7 0x4002c9b9f8}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:12 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4b4751f9-4828-4c36-b9ee-bcb81a09a2f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-28 14:38:25 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.68,StartTime:2020-08-28 14:38:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 14:38:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8db94eb9c0978eca2d57eacfba54e4badb5cb2b2503af126539104b76d6dcd94,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
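(Editor's aside: the "is available" / "is not available" labels on these pods follow from the pod status. With the deployment's default minReadySeconds of 0, a pod counts as available once it is Running and its Ready condition is True. A minimal sketch of that check, using a hypothetical helper rather than the framework's own code:)

```python
def is_pod_available(pod: dict) -> bool:
    """Treat a pod as available when it is Running and its Ready
    condition is True (deployment availability with minReadySeconds=0)."""
    status = pod.get("status", {})
    if status.get("phase") != "Running":
        return False
    return any(
        c.get("type") == "Ready" and c.get("status") == "True"
        for c in status.get("conditions", [])
    )

# The Pending pod above (Ready=False, reason ContainersNotReady) is not
# available; the Running pods with Ready=True are.
pending = {"status": {"phase": "Pending",
                      "conditions": [{"type": "Ready", "status": "False"}]}}
running = {"status": {"phase": "Running",
                      "conditions": [{"type": "Ready", "status": "True"}]}}
print(is_pod_available(pending), is_pod_available(running))
# → False True
```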
Aug 28 14:38:39.809: INFO: Pod "webserver-deployment-84855cf797-cx7xp" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-cx7xp webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-cx7xp f7f73eed-329e-4cf9-9fc9-ea6846b05b8e 1777581 0 2020-08-28 14:38:12 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x4002c9bba7 0x4002c9bba8}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:12 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4b4751f9-4828-4c36-b9ee-bcb81a09a2f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-28 14:38:28 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.55,StartTime:2020-08-28 14:38:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 14:38:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://269131de96e9e341aa34fafde30c153ade2edba4f93b4a9df60340b759ec08fa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.810: INFO: Pod "webserver-deployment-84855cf797-dhgm8" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-dhgm8 webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-dhgm8 fbab8e62-5d32-43eb-9480-25b40ef53a47 1777547 0 2020-08-28 14:38:12 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x4002c9bd57 0x4002c9bd58}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:12 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4b4751f9-4828-4c36-b9ee-bcb81a09a2f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-28 14:38:23 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.66,StartTime:2020-08-28 14:38:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 14:38:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://42b605c3e385a7932a11f9ebb173c8101e56ce63cccc09ce21b6112520306d1e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
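The FieldsV1 `Raw` payloads inside these managedFields entries are plain UTF-8 JSON; the Go struct formatter used by this logger prints them as lists of decimal byte values. A minimal sketch of how such a list can be turned back into readable JSON (the helper name `decode_fieldsv1_raw` is mine, not part of the test framework):

```python
import json

def decode_fieldsv1_raw(raw_bytes):
    """Decode a FieldsV1 Raw dump (a list of decimal byte values, as
    printed in these logs) back into the managedFields JSON object."""
    text = bytes(raw_bytes).decode("utf-8")
    return json.loads(text)

# The opening bytes of the dumps above (123 34 102 58 109 ...) spell
# '{"f:metadata":...'. A tiny self-contained sample:
sample = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123, 125, 125]
print(decode_fieldsv1_raw(sample))  # {'f:metadata': {}}
```

Applied to a full Raw array from the log, this yields the server-side-apply field ownership map (e.g. which fields kube-controller-manager vs. the kubelet manage).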
Aug 28 14:38:39.811: INFO: Pod "webserver-deployment-84855cf797-f6jjv" is not available:
&Pod{webserver-deployment-84855cf797-f6jjv (namespace: deployment-5935, uid: 3ac81d94-22ff-43da-b82b-0351963247a5, resourceVersion: 1777728, created: 2020-08-28 14:38:35 +0000 UTC)
Labels: name=httpd, pod-template-hash=84855cf797; controlled by ReplicaSet webserver-deployment-84855cf797 (uid 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7)
ManagedFields (FieldsV1 Raw decoded from the logged decimal byte values): kube-controller-manager, Update v1 at 14:38:35, owns metadata (generateName, labels, ownerReferences) and spec (container "httpd" fields plus dnsPolicy, enableServiceLinks, restartPolicy, schedulerName, securityContext, terminationGracePeriodSeconds)
Spec: container httpd, image docker.io/library/httpd:2.4.38-alpine, imagePullPolicy IfNotPresent; secret volume default-token-4tnb2 mounted read-only at /var/run/secrets/kubernetes.io/serviceaccount; restartPolicy Always, terminationGracePeriodSeconds 0, dnsPolicy ClusterFirst, serviceAccount default, scheduler default-scheduler, node kali-worker2; tolerations node.kubernetes.io/not-ready and node.kubernetes.io/unreachable (Exists, NoExecute, 300s)
Status: Phase Pending; only condition PodScheduled True (since 14:38:36); no HostIP, PodIP, StartTime, or container statuses yet; QOSClass BestEffort}
Aug 28 14:38:39.812: INFO: Pod "webserver-deployment-84855cf797-gtbx5" is not available:
&Pod{webserver-deployment-84855cf797-gtbx5 (namespace: deployment-5935, uid: 0d113927-5b28-4b06-9566-896f7ab35f71, resourceVersion: 1777729, created: 2020-08-28 14:38:35 +0000 UTC)
Labels: name=httpd, pod-template-hash=84855cf797; controlled by ReplicaSet webserver-deployment-84855cf797 (uid 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7)
ManagedFields (FieldsV1 Raw decoded from the logged decimal byte values): kube-controller-manager, Update v1 at 14:38:35, owns metadata (generateName, labels, ownerReferences) and spec (container "httpd" fields plus dnsPolicy, enableServiceLinks, restartPolicy, schedulerName, securityContext, terminationGracePeriodSeconds)
Spec: container httpd, image docker.io/library/httpd:2.4.38-alpine, imagePullPolicy IfNotPresent; secret volume default-token-4tnb2 mounted read-only at /var/run/secrets/kubernetes.io/serviceaccount; restartPolicy Always, terminationGracePeriodSeconds 0, dnsPolicy ClusterFirst, serviceAccount default, scheduler default-scheduler, node kali-worker; tolerations node.kubernetes.io/not-ready and node.kubernetes.io/unreachable (Exists, NoExecute, 300s)
Status: Phase Pending; only condition PodScheduled True (since 14:38:36); no HostIP, PodIP, StartTime, or container statuses yet; QOSClass BestEffort}
Aug 28 14:38:39.813: INFO: Pod "webserver-deployment-84855cf797-hd7lt" is not available:
&Pod{webserver-deployment-84855cf797-hd7lt (namespace: deployment-5935, uid: a4c00a82-035f-46f0-a16d-05e1c81d29e4, resourceVersion: 1777713, created: 2020-08-28 14:38:35 +0000 UTC)
Labels: name=httpd, pod-template-hash=84855cf797; controlled by ReplicaSet webserver-deployment-84855cf797 (uid 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7)
ManagedFields (FieldsV1 Raw decoded from the logged decimal byte values): kube-controller-manager, Update v1 at 14:38:35, owns metadata (generateName, labels, ownerReferences) and spec (container "httpd" fields plus dnsPolicy, enableServiceLinks, restartPolicy, schedulerName, securityContext, terminationGracePeriodSeconds)
Spec: container httpd, image docker.io/library/httpd:2.4.38-alpine, imagePullPolicy IfNotPresent; secret volume default-token-4tnb2 mounted read-only at /var/run/secrets/kubernetes.io/serviceaccount; restartPolicy Always, terminationGracePeriodSeconds 0, dnsPolicy ClusterFirst, serviceAccount default, scheduler default-scheduler, node kali-worker; tolerations node.kubernetes.io/not-ready and node.kubernetes.io/unreachable (Exists, NoExecute, 300s)
Status: Phase Pending; only condition PodScheduled True (since 14:38:35); no HostIP, PodIP, StartTime, or container statuses yet; QOSClass BestEffort}
Aug 28 14:38:39.814: INFO: Pod "webserver-deployment-84855cf797-mhmx8" is available:
&Pod{webserver-deployment-84855cf797-mhmx8 (namespace: deployment-5935, uid: b4a2516c-aba3-446b-b7f7-065aaf3b0cc7, resourceVersion: 1777554, created: 2020-08-28 14:38:12 +0000 UTC)
Labels: name=httpd, pod-template-hash=84855cf797; controlled by ReplicaSet webserver-deployment-84855cf797 (uid 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7)
ManagedFields (FieldsV1 Raw decoded from the logged decimal byte values): kube-controller-manager, Update v1 at 14:38:12, owns metadata (generateName, labels, ownerReferences) and spec (container "httpd": image, imagePullPolicy, name, resources, securityContext, terminationMessagePath, terminationMessagePolicy; plus dnsPolicy, enableServiceLinks, restartPolicy, schedulerName, securityContext, terminationGracePeriodSeconds); kubelet, Update v1 at 14:38:24, owns status (conditions ContainersReady/Initialized/Ready, containerStatuses, hostIP, phase, podIP, podIPs["10.244.1.67"], startTime)
Spec: container httpd, image docker.io/library/httpd:2.4.38-alpine, imagePullPolicy IfNotPresent; secret volume default-token-4tnb2 mounted read-only at /var/run/secrets/kubernetes.io/serviceaccount; restartPolicy Always, terminationGracePeriodSeconds 0, dnsPolicy ClusterFirst, serviceAccount default, scheduler default-scheduler, node kali-worker; tolerations node.kubernetes.io/not-ready and node.kubernetes.io/unreachable (Exists, NoExecute, 300s)
Status: Phase Running; conditions Initialized (since 14:38:13), Ready (14:38:23), ContainersReady (14:38:23), PodScheduled (14:38:12) all True; HostIP 172.18.0.15, PodIP 10.244.1.67, StartTime 14:38:13; container httpd running since 14:38:21, ready, restartCount 0, imageID docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060, containerID containerd://11fcd3592e9a206bd341dffab814f491c5fb926c81a58c3f802f2ad60de220ad; QOSClass BestEffort}
Aug 28 14:38:39.815: INFO: Pod "webserver-deployment-84855cf797-mscvt" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mscvt webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-mscvt 6613a3a2-b4a4-4a65-8b22-d840061bbece 1777727 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x40041ea447 0x40041ea448}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.817: INFO: Pod "webserver-deployment-84855cf797-ngp5q" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ngp5q webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-ngp5q 372480ad-7267-410a-a34f-e2d0b35e620c 1777546 0 2020-08-28 14:38:12 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x40041ea577 0x40041ea578}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:23 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 
44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 53 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.54,StartTime:2020-08-28 14:38:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 14:38:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5698bdbb7bc39ab0af6ad43008b91c95fecde339496ae23ca19474198760e5f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.818: INFO: Pod "webserver-deployment-84855cf797-spwrr" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-spwrr webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-spwrr 4a9e62a5-ef85-40c5-82bd-d7f043638b82 1777761 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x40041ea727 0x40041ea728}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 
58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-28 14:38:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.819: INFO: Pod "webserver-deployment-84855cf797-wd2s4" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-wd2s4 webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-wd2s4 c55c3a0b-b4b0-4d46-9743-ffb51c348d3f 1777744 0 2020-08-28 14:38:34 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x40041ea8b7 0x40041ea8b8}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 
58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-28 14:38:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.821: INFO: Pod "webserver-deployment-84855cf797-wdhfr" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-wdhfr webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-wdhfr 0e85eef3-5f75-492f-98b9-fc7a4f55140b 1777515 0 2020-08-28 14:38:12 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x40041eaa47 0x40041eaa48}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:18 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 
44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 53 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.53,StartTime:2020-08-28 14:38:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-28 14:38:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://da1e517eacbd546854e101bc121d51c8556194271ae00c8647ce0249bbe56630,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.53,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.822: INFO: Pod "webserver-deployment-84855cf797-x86jn" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-x86jn webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-x86jn b495241f-7d97-40de-9944-96caec02b1f2 1777736 0 2020-08-28 14:38:34 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x40041eabf7 0x40041eabf8}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-28 14:38:36 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 
58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-28 14:38:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:38:39.823: INFO: Pod "webserver-deployment-84855cf797-xgz8n" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xgz8n webserver-deployment-84855cf797- deployment-5935 /api/v1/namespaces/deployment-5935/pods/webserver-deployment-84855cf797-xgz8n c7bdd0b5-bd0e-4fe3-87d0-f7e214cdf265 1777715 0 2020-08-28 14:38:35 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 4b4751f9-4828-4c36-b9ee-bcb81a09a2f7 0x40041ead87 0x40041ead88}] []  [{kube-controller-manager Update v1 2020-08-28 14:38:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 52 55 53 49 102 57 45 52 56 50 56 45 52 99 51 54 45 98 57 101 101 45 98 99 98 56 49 97 48 57 97 50 102 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4tnb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4tnb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4tnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-28 14:38:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:38:39.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5935" for this suite.

• [SLOW TEST:29.779 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":221,"skipped":3887,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:38:42.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 28 14:38:46.725: INFO: Waiting up to 5m0s for pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356" in namespace "downward-api-8223" to be "Succeeded or Failed"
Aug 28 14:38:47.033: INFO: Pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356": Phase="Pending", Reason="", readiness=false. Elapsed: 308.186915ms
Aug 28 14:38:50.180: INFO: Pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356": Phase="Pending", Reason="", readiness=false. Elapsed: 3.455273399s
Aug 28 14:38:53.130: INFO: Pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405160839s
Aug 28 14:38:55.566: INFO: Pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356": Phase="Pending", Reason="", readiness=false. Elapsed: 8.840986998s
Aug 28 14:38:57.665: INFO: Pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356": Phase="Pending", Reason="", readiness=false. Elapsed: 10.940216409s
Aug 28 14:39:00.372: INFO: Pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356": Phase="Pending", Reason="", readiness=false. Elapsed: 13.647374997s
Aug 28 14:39:02.855: INFO: Pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356": Phase="Pending", Reason="", readiness=false. Elapsed: 16.129878811s
Aug 28 14:39:05.004: INFO: Pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356": Phase="Running", Reason="", readiness=true. Elapsed: 18.278411379s
Aug 28 14:39:07.030: INFO: Pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.30477498s
STEP: Saw pod success
Aug 28 14:39:07.030: INFO: Pod "downward-api-e8115568-5195-42f4-81d7-df8e45566356" satisfied condition "Succeeded or Failed"
Aug 28 14:39:07.039: INFO: Trying to get logs from node kali-worker pod downward-api-e8115568-5195-42f4-81d7-df8e45566356 container dapi-container: 
STEP: delete the pod
Aug 28 14:39:07.114: INFO: Waiting for pod downward-api-e8115568-5195-42f4-81d7-df8e45566356 to disappear
Aug 28 14:39:07.122: INFO: Pod downward-api-e8115568-5195-42f4-81d7-df8e45566356 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:39:07.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8223" for this suite.

• [SLOW TEST:24.737 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3908,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:39:07.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 28 14:39:19.871: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:19.878: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 14:39:21.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:22.338: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 14:39:23.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:24.189: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 14:39:25.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:25.886: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 14:39:27.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:27.885: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 14:39:29.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:29.886: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 14:39:31.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:31.886: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 14:39:33.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:33.887: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 14:39:35.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:35.886: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 14:39:37.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:38.125: INFO: Pod pod-with-prestop-http-hook still exists
Aug 28 14:39:39.879: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 28 14:39:39.885: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:39:39.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6568" for this suite.

• [SLOW TEST:32.732 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3975,"failed":0}
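For context, the fixture created in the "create the pod with lifecycle hook" step above is a pod with an HTTP preStop handler, presumably along the lines of this sketch (the image, port, and handler path are illustrative, not taken from this log; the actual fixture lives in test/e2e/common/lifecycle_hook.go):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # illustrative image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # illustrative handler path
          port: 8080
```

On deletion, the kubelet invokes the preStop hook before sending SIGTERM to the container, which is why the pod lingers through the polling loop above; the "check prestop hook" step then verifies the handler observed the request.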
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:39:39.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 28 14:39:40.055: INFO: Waiting up to 5m0s for pod "pod-af8f8506-f84d-4598-9eff-aa1178c7e5eb" in namespace "emptydir-6901" to be "Succeeded or Failed"
Aug 28 14:39:40.071: INFO: Pod "pod-af8f8506-f84d-4598-9eff-aa1178c7e5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.601856ms
Aug 28 14:39:42.078: INFO: Pod "pod-af8f8506-f84d-4598-9eff-aa1178c7e5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022936252s
Aug 28 14:39:44.084: INFO: Pod "pod-af8f8506-f84d-4598-9eff-aa1178c7e5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028538952s
Aug 28 14:39:46.090: INFO: Pod "pod-af8f8506-f84d-4598-9eff-aa1178c7e5eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034540669s
STEP: Saw pod success
Aug 28 14:39:46.090: INFO: Pod "pod-af8f8506-f84d-4598-9eff-aa1178c7e5eb" satisfied condition "Succeeded or Failed"
Aug 28 14:39:46.094: INFO: Trying to get logs from node kali-worker2 pod pod-af8f8506-f84d-4598-9eff-aa1178c7e5eb container test-container: 
STEP: delete the pod
Aug 28 14:39:46.239: INFO: Waiting for pod pod-af8f8506-f84d-4598-9eff-aa1178c7e5eb to disappear
Aug 28 14:39:46.249: INFO: Pod pod-af8f8506-f84d-4598-9eff-aa1178c7e5eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:39:46.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6901" for this suite.

• [SLOW TEST:6.351 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3982,"failed":0}
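A pod matching this (non-root,0666,tmpfs) variant would presumably look like the sketch below; the UID and image are illustrative, and `medium: Memory` is what makes the emptyDir tmpfs-backed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666-tmpfs
spec:
  securityContext:
    runAsUser: 1001            # any non-root UID; illustrative
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # illustrative
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
```

The test container writes a file with mode 0666 into the mount and the spec asserts the observed permissions and filesystem type from the container's output.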
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:39:46.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Aug 28 14:39:46.378: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Aug 28 14:39:46.391: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Aug 28 14:39:46.393: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Aug 28 14:39:46.405: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Aug 28 14:39:46.405: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Aug 28 14:39:46.512: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Aug 28 14:39:46.512: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Aug 28 14:39:53.948: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:39:53.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-166" for this suite.

• [SLOW TEST:7.817 seconds]
[sig-scheduling] LimitRange
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":225,"skipped":4032,"failed":0}
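Decoding the quantities in the verification lines above (209715200 bytes = 200Mi, 214748364800 = 200Gi, 524288000 = 500Mi, 536870912000 = 500Gi), the LimitRange under test applies per-container defaults roughly equivalent to the following (metadata name is illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-defaults    # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:            # applied when a container omits requests
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                   # applied when a container omits limits
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
```

The "partial resource requirements" step shows the merge rule: the observed cpu request and limit of 300m with memory/ephemeral-storage requests of 150Mi/150Gi is consistent with a pod that set only a cpu limit plus those two requests, letting the unset limits fall back to the LimitRange defaults and the unset cpu request default to its own limit.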
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:39:54.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 28 14:40:03.716: INFO: Successfully updated pod "pod-update-9c9fd6b2-9be1-40be-9220-b5059ccd2cb3"
STEP: verifying the updated pod is in kubernetes
Aug 28 14:40:03.730: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:40:03.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1799" for this suite.

• [SLOW TEST:9.660 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":4044,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:40:03.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:40:04.938: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49" in namespace "downward-api-1430" to be "Succeeded or Failed"
Aug 28 14:40:05.192: INFO: Pod "downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49": Phase="Pending", Reason="", readiness=false. Elapsed: 253.955919ms
Aug 28 14:40:07.283: INFO: Pod "downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344443723s
Aug 28 14:40:09.467: INFO: Pod "downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.52861346s
Aug 28 14:40:12.195: INFO: Pod "downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49": Phase="Pending", Reason="", readiness=false. Elapsed: 7.257067107s
Aug 28 14:40:14.259: INFO: Pod "downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49": Phase="Pending", Reason="", readiness=false. Elapsed: 9.320475917s
Aug 28 14:40:16.266: INFO: Pod "downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.327138357s
STEP: Saw pod success
Aug 28 14:40:16.266: INFO: Pod "downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49" satisfied condition "Succeeded or Failed"
Aug 28 14:40:16.303: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49 container client-container: 
STEP: delete the pod
Aug 28 14:40:16.635: INFO: Waiting for pod downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49 to disappear
Aug 28 14:40:16.664: INFO: Pod downwardapi-volume-34bcf4c8-6c42-4689-be90-a70df5d17b49 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:40:16.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1430" for this suite.

• [SLOW TEST:12.941 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":4049,"failed":0}
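The "downward API volume plugin" pod above (container `client-container` per the log) presumably mounts a volume along these lines, exposing the container's memory request as a file; the file path is illustrative:

```yaml
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: memory_request        # illustrative file name
      resourceFieldRef:
        containerName: client-container
        resource: requests.memory
```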
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:40:16.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 28 14:40:17.055: INFO: Waiting up to 5m0s for pod "downward-api-d12d61ba-ad85-4dce-8ff9-4116b049b74d" in namespace "downward-api-8526" to be "Succeeded or Failed"
Aug 28 14:40:17.092: INFO: Pod "downward-api-d12d61ba-ad85-4dce-8ff9-4116b049b74d": Phase="Pending", Reason="", readiness=false. Elapsed: 37.586753ms
Aug 28 14:40:19.142: INFO: Pod "downward-api-d12d61ba-ad85-4dce-8ff9-4116b049b74d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086904014s
Aug 28 14:40:21.150: INFO: Pod "downward-api-d12d61ba-ad85-4dce-8ff9-4116b049b74d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09548243s
STEP: Saw pod success
Aug 28 14:40:21.151: INFO: Pod "downward-api-d12d61ba-ad85-4dce-8ff9-4116b049b74d" satisfied condition "Succeeded or Failed"
Aug 28 14:40:21.157: INFO: Trying to get logs from node kali-worker2 pod downward-api-d12d61ba-ad85-4dce-8ff9-4116b049b74d container dapi-container: 
STEP: delete the pod
Aug 28 14:40:21.189: INFO: Waiting for pod downward-api-d12d61ba-ad85-4dce-8ff9-4116b049b74d to disappear
Aug 28 14:40:21.290: INFO: Pod downward-api-d12d61ba-ad85-4dce-8ff9-4116b049b74d no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:40:21.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8526" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":4062,"failed":0}
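This variant exposes resources as environment variables rather than a volume. Because the container sets no limits, the downward API falls back to node allocatable, which is what the spec checks. A sketch, with variable names illustrative (`dapi-container` is from the log):

```yaml
containers:
- name: dapi-container
  env:
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        containerName: dapi-container
        resource: limits.cpu
  - name: MEMORY_LIMIT
    valueFrom:
      resourceFieldRef:
        containerName: dapi-container
        resource: limits.memory
```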
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:40:21.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-e9621986-1069-4748-a1d1-eea19f79977a
STEP: Creating a pod to test consume secrets
Aug 28 14:40:21.487: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-61a4487c-2d2c-4aac-923b-8ba3effd332f" in namespace "projected-1476" to be "Succeeded or Failed"
Aug 28 14:40:21.505: INFO: Pod "pod-projected-secrets-61a4487c-2d2c-4aac-923b-8ba3effd332f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.410631ms
Aug 28 14:40:23.512: INFO: Pod "pod-projected-secrets-61a4487c-2d2c-4aac-923b-8ba3effd332f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024830975s
Aug 28 14:40:25.520: INFO: Pod "pod-projected-secrets-61a4487c-2d2c-4aac-923b-8ba3effd332f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033051697s
STEP: Saw pod success
Aug 28 14:40:25.520: INFO: Pod "pod-projected-secrets-61a4487c-2d2c-4aac-923b-8ba3effd332f" satisfied condition "Succeeded or Failed"
Aug 28 14:40:25.526: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-61a4487c-2d2c-4aac-923b-8ba3effd332f container projected-secret-volume-test: 
STEP: delete the pod
Aug 28 14:40:25.683: INFO: Waiting for pod pod-projected-secrets-61a4487c-2d2c-4aac-923b-8ba3effd332f to disappear
Aug 28 14:40:25.776: INFO: Pod pod-projected-secrets-61a4487c-2d2c-4aac-923b-8ba3effd332f no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:40:25.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1476" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":4095,"failed":0}
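The projected-secret volume with `defaultMode` presumably looks like this sketch (the secret name is from the log; the mode value is illustrative):

```yaml
volumes:
- name: projected-secret-volume
  projected:
    defaultMode: 0400            # illustrative; the spec asserts the file mode matches
    sources:
    - secret:
        name: projected-secret-test-e9621986-1069-4748-a1d1-eea19f79977a
```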
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:40:25.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 28 14:40:25.965: INFO: Waiting up to 5m0s for pod "pod-c0d12382-4f62-4a79-af71-db641e40628d" in namespace "emptydir-7473" to be "Succeeded or Failed"
Aug 28 14:40:26.010: INFO: Pod "pod-c0d12382-4f62-4a79-af71-db641e40628d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.258035ms
Aug 28 14:40:28.038: INFO: Pod "pod-c0d12382-4f62-4a79-af71-db641e40628d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071894639s
Aug 28 14:40:30.133: INFO: Pod "pod-c0d12382-4f62-4a79-af71-db641e40628d": Phase="Running", Reason="", readiness=true. Elapsed: 4.166884751s
Aug 28 14:40:32.147: INFO: Pod "pod-c0d12382-4f62-4a79-af71-db641e40628d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.18171094s
STEP: Saw pod success
Aug 28 14:40:32.148: INFO: Pod "pod-c0d12382-4f62-4a79-af71-db641e40628d" satisfied condition "Succeeded or Failed"
Aug 28 14:40:32.153: INFO: Trying to get logs from node kali-worker pod pod-c0d12382-4f62-4a79-af71-db641e40628d container test-container: 
STEP: delete the pod
Aug 28 14:40:32.308: INFO: Waiting for pod pod-c0d12382-4f62-4a79-af71-db641e40628d to disappear
Aug 28 14:40:32.329: INFO: Pod pod-c0d12382-4f62-4a79-af71-db641e40628d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:40:32.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7473" for this suite.

• [SLOW TEST:6.450 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":4105,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:40:32.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 28 14:40:38.999: INFO: Successfully updated pod "adopt-release-dj574"
STEP: Checking that the Job readopts the Pod
Aug 28 14:40:38.999: INFO: Waiting up to 15m0s for pod "adopt-release-dj574" in namespace "job-4989" to be "adopted"
Aug 28 14:40:39.035: INFO: Pod "adopt-release-dj574": Phase="Running", Reason="", readiness=true. Elapsed: 35.631434ms
Aug 28 14:40:41.042: INFO: Pod "adopt-release-dj574": Phase="Running", Reason="", readiness=true. Elapsed: 2.042797366s
Aug 28 14:40:41.043: INFO: Pod "adopt-release-dj574" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 28 14:40:41.559: INFO: Successfully updated pod "adopt-release-dj574"
STEP: Checking that the Job releases the Pod
Aug 28 14:40:41.560: INFO: Waiting up to 15m0s for pod "adopt-release-dj574" in namespace "job-4989" to be "released"
Aug 28 14:40:41.746: INFO: Pod "adopt-release-dj574": Phase="Running", Reason="", readiness=true. Elapsed: 186.368826ms
Aug 28 14:40:43.773: INFO: Pod "adopt-release-dj574": Phase="Running", Reason="", readiness=true. Elapsed: 2.213464545s
Aug 28 14:40:43.773: INFO: Pod "adopt-release-dj574" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:40:43.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4989" for this suite.

• [SLOW TEST:11.440 seconds]
[sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":231,"skipped":4122,"failed":0}
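The adopt/release mechanics hinge on the controller ownerReference: orphaning removes it, the Job controller re-adds it to any running pod that matches the Job's selector, and removing the matching labels makes the controller release the pod again. After re-adoption, pod `adopt-release-dj574` would carry metadata along these lines (the label key, Job name, and UID placeholder are illustrative):

```yaml
metadata:
  name: adopt-release-dj574
  labels:
    job: adopt-release                 # illustrative selector label
  ownerReferences:
  - apiVersion: batch/v1
    kind: Job
    name: adopt-release                # illustrative Job name
    controller: true
    blockOwnerDeletion: true
    uid: <job-uid>
```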
S
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:40:43.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-d77c5656-f33f-49e0-a3af-3a0b258068e6
STEP: Creating configMap with name cm-test-opt-upd-14d4b035-a30c-4c6c-a5b8-9edd573406c3
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d77c5656-f33f-49e0-a3af-3a0b258068e6
STEP: Updating configmap cm-test-opt-upd-14d4b035-a30c-4c6c-a5b8-9edd573406c3
STEP: Creating configMap with name cm-test-opt-create-4152f9ad-eb15-4eea-bb36-c0316e9161da
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:42:07.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1088" for this suite.

• [SLOW TEST:84.251 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":4123,"failed":0}
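The three ConfigMaps above exercise the `optional` flag on configMap volume sources: an optional source that is deleted, one that is updated, and one created only after the pod starts. Each source presumably resembles the following (ConfigMap name taken from the log; volume name illustrative):

```yaml
volumes:
- name: delcm-volume              # illustrative volume name
  configMap:
    name: cm-test-opt-del-d77c5656-f33f-49e0-a3af-3a0b258068e6
    optional: true                # pod starts even if the ConfigMap is absent
```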
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:42:08.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:42:16.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9121" for this suite.

• [SLOW TEST:8.342 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":4128,"failed":0}
SSSSSSS
------------------------------
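For context on the hostAliases test that just passed: kubelet writes the pod's `hostAliases` entries into the container's `/etc/hosts`. A minimal sketch of that file format — the IPs and hostnames below are hypothetical examples, not values from this run:

```shell
# Sketch of the /etc/hosts layout kubelet manages for a pod with hostAliases.
# All names and addresses here are illustrative assumptions.
hosts_demo=$(mktemp)
cat > "$hosts_demo" <<'EOF'
# Kubernetes-managed hosts file.
127.0.0.1	localhost
10.244.1.7	hostaliases-pod
# Entries added by HostAliases.
123.45.67.89	foo.local	bar.local
EOF
# The test's assertion amounts to: the alias names appear in the file.
grep 'foo.local' "$hosts_demo"
```

The e2e test performs essentially this `grep` from inside the container after scheduling a busybox pod whose spec carries the `hostAliases` field.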
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:42:16.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4215 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4215;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4215 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4215;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4215.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4215.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4215.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4215.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4215.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4215.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4215.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4215.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4215.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4215.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4215.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.114.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.114.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.114.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.114.108_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4215 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4215;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4215 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4215;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4215.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4215.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4215.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4215.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4215.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4215.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4215.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4215.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4215.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4215.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4215.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4215.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.114.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.114.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.114.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.114.108_tcp@PTR;sleep 1; done
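Each record probed in the long command strings above follows one pattern: run the lookup, require non-empty output, and write an `OK` marker file that the test framework later reads back. A minimal sketch of that pattern, with `dig` replaced by a stub so it runs outside a cluster (the stub, service name, and results path are assumptions; the real probe also escapes `$` as `$$` for templating):

```shell
# Sketch of the e2e DNS probe pattern: a lookup whose non-empty output
# produces an OK marker file. 'lookup' stands in for dig for portability.
results=$(mktemp -d)
lookup() { echo "10.96.0.1"; }   # stub: the real probe runs dig +noall +answer +search
check="$(lookup dns-test-service A)" \
  && test -n "$check" \
  && echo OK > "$results/udp@dns-test-service"
cat "$results/udp@dns-test-service"   # prints "OK"
```

The framework polls these marker files (the "looking for the results for each expected name from probers" step below); a missing file means the corresponding lookup has not yet succeeded.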

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 14:42:24.945: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:24.949: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:24.953: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:24.958: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:24.963: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:24.969: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:24.974: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:24.981: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:25.007: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:25.011: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:25.016: INFO: Unable to read jessie_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:25.020: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:25.024: INFO: Unable to read jessie_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:25.028: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:25.031: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:25.035: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:25.075: INFO: Lookups using dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4215 wheezy_tcp@dns-test-service.dns-4215 wheezy_udp@dns-test-service.dns-4215.svc wheezy_tcp@dns-test-service.dns-4215.svc wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4215 jessie_tcp@dns-test-service.dns-4215 jessie_udp@dns-test-service.dns-4215.svc jessie_tcp@dns-test-service.dns-4215.svc jessie_udp@_http._tcp.dns-test-service.dns-4215.svc jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc]

Aug 28 14:42:30.103: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.109: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.113: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.116: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.120: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.125: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.128: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.133: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.160: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.163: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.167: INFO: Unable to read jessie_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.171: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.175: INFO: Unable to read jessie_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.179: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.183: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.186: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:30.207: INFO: Lookups using dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4215 wheezy_tcp@dns-test-service.dns-4215 wheezy_udp@dns-test-service.dns-4215.svc wheezy_tcp@dns-test-service.dns-4215.svc wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4215 jessie_tcp@dns-test-service.dns-4215 jessie_udp@dns-test-service.dns-4215.svc jessie_tcp@dns-test-service.dns-4215.svc jessie_udp@_http._tcp.dns-test-service.dns-4215.svc jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc]

Aug 28 14:42:35.256: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.261: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.265: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.269: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.273: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.278: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.281: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.285: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.311: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.317: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.321: INFO: Unable to read jessie_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.324: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.332: INFO: Unable to read jessie_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.336: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.338: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.341: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:35.667: INFO: Lookups using dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4215 wheezy_tcp@dns-test-service.dns-4215 wheezy_udp@dns-test-service.dns-4215.svc wheezy_tcp@dns-test-service.dns-4215.svc wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4215 jessie_tcp@dns-test-service.dns-4215 jessie_udp@dns-test-service.dns-4215.svc jessie_tcp@dns-test-service.dns-4215.svc jessie_udp@_http._tcp.dns-test-service.dns-4215.svc jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc]

Aug 28 14:42:40.093: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.097: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.102: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.106: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.114: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.119: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.123: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.150: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.154: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.158: INFO: Unable to read jessie_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.162: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.223: INFO: Unable to read jessie_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.228: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.232: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.236: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:40.274: INFO: Lookups using dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4215 wheezy_tcp@dns-test-service.dns-4215 wheezy_udp@dns-test-service.dns-4215.svc wheezy_tcp@dns-test-service.dns-4215.svc wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4215 jessie_tcp@dns-test-service.dns-4215 jessie_udp@dns-test-service.dns-4215.svc jessie_tcp@dns-test-service.dns-4215.svc jessie_udp@_http._tcp.dns-test-service.dns-4215.svc jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc]

Aug 28 14:42:45.083: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.089: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.094: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.099: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.104: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.108: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.113: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.117: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.151: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.155: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.159: INFO: Unable to read jessie_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.163: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.167: INFO: Unable to read jessie_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.172: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.178: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.182: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:45.206: INFO: Lookups using dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4215 wheezy_tcp@dns-test-service.dns-4215 wheezy_udp@dns-test-service.dns-4215.svc wheezy_tcp@dns-test-service.dns-4215.svc wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4215 jessie_tcp@dns-test-service.dns-4215 jessie_udp@dns-test-service.dns-4215.svc jessie_tcp@dns-test-service.dns-4215.svc jessie_udp@_http._tcp.dns-test-service.dns-4215.svc jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc]

Aug 28 14:42:50.117: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.124: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.129: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.134: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.138: INFO: Unable to read wheezy_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.143: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.147: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.151: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.284: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.288: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.291: INFO: Unable to read jessie_udp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.294: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215 from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.298: INFO: Unable to read jessie_udp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.301: INFO: Unable to read jessie_tcp@dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.304: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.307: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc from pod dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb: the server could not find the requested resource (get pods dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb)
Aug 28 14:42:50.326: INFO: Lookups using dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4215 wheezy_tcp@dns-test-service.dns-4215 wheezy_udp@dns-test-service.dns-4215.svc wheezy_tcp@dns-test-service.dns-4215.svc wheezy_udp@_http._tcp.dns-test-service.dns-4215.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4215.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4215 jessie_tcp@dns-test-service.dns-4215 jessie_udp@dns-test-service.dns-4215.svc jessie_tcp@dns-test-service.dns-4215.svc jessie_udp@_http._tcp.dns-test-service.dns-4215.svc jessie_tcp@_http._tcp.dns-test-service.dns-4215.svc]

Aug 28 14:42:55.914: INFO: DNS probes using dns-4215/dns-test-0182b6ef-5354-4340-8e9c-5035aa5088cb succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:42:57.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4215" for this suite.

• [SLOW TEST:40.915 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":234,"skipped":4135,"failed":0}
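The lookup names probed in the test above (e.g. `dns-test-service.dns-4215.svc` and `_http._tcp.dns-test-service.dns-4215.svc`) resolve because the test creates a regular service and a headless service in its namespace. A minimal sketch of equivalent objects — service names and namespace are taken from the log, while the selector label and port details are assumptions:

```yaml
# Regular ClusterIP service: backs dns-test-service.<ns>.svc lookups.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
  namespace: dns-4215
spec:
  selector:
    dns-test: "true"   # assumed label on the probe pods
  ports:
  - name: http         # a named TCP port yields _http._tcp.<svc> SRV records
    protocol: TCP
    port: 80
---
# Headless variant: clusterIP None makes DNS return pod IPs directly.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-headless
  namespace: dns-4215
spec:
  clusterIP: None
  selector:
    dns-test: "true"
  ports:
  - name: http
    protocol: TCP
    port: 80
```

The "partial qualified names" in the test title are the short forms (`dns-test-service`, `dns-test-service.dns-4215`) that only resolve through the pod's DNS search path, which is why the probe exercises both short and fully qualified variants.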
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:42:57.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 28 14:42:57.564: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 28 14:43:02.577: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:43:02.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4230" for this suite.

• [SLOW TEST:5.610 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":235,"skipped":4180,"failed":0}
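The release step above works because a ReplicationController only owns pods whose labels match its selector; changing the matched label on a pod orphans (releases) it. A hypothetical equivalent of the controller under test — name and namespace from the log, image choice assumed:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
  namespace: replication-controller-4230
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: httpd   # image assumed, not taken from the log
```

Relabeling the pod, e.g. `kubectl label pod <pod> name=released --overwrite`, removes it from the selector match; the controller releases the pod and spins up a replacement to restore the replica count.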
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:43:02.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:43:07.047: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:43:09.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222587, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222587, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222587, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222586, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:43:11.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222587, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222587, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222587, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734222586, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:43:14.561: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:43:14.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1551" for this suite.
STEP: Destroying namespace "webhook-1551-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.722 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":236,"skipped":4194,"failed":0}
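The "Registering the mutating pod webhook via the AdmissionRegistration API" step above corresponds to creating a MutatingWebhookConfiguration pointing at the deployed service. A hedged sketch — the service name `e2e-test-webhook` and namespace `webhook-1551` appear in the log; the configuration name, webhook name, and path are assumptions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-mutation-webhook        # hypothetical name
webhooks:
- name: pod-mutation.example.com    # hypothetical name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook        # service name from the log
      namespace: webhook-1551
      path: /mutating-pods          # assumed path
    caBundle: <base64 CA bundle>    # from the server cert set up earlier
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
```

With this in place, the apiserver sends every matching pod CREATE to the webhook service, which returns a JSONPatch; the test then creates a pod and checks the mutation and defaulting were applied.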
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:43:18.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:43:21.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config version'
Aug 28 14:43:25.028: INFO: stderr: ""
Aug 28 14:43:25.028: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.8\", GitCommit:\"9f2892aab98fe339f3bd70e3c470144299398ace\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T16:12:48Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.8\", GitCommit:\"9f2892aab98fe339f3bd70e3c470144299398ace\", GitTreeState:\"clean\", BuildDate:\"2020-08-14T21:13:38Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:43:25.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5498" for this suite.

• [SLOW TEST:7.940 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl version
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397
    should check is all data is printed  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":237,"skipped":4201,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:43:26.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:43:28.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8994" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4218,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:43:28.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1286
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1286
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1286
Aug 28 14:43:28.607: INFO: Found 0 stateful pods, waiting for 1
Aug 28 14:43:38.613: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 28 14:43:38.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 28 14:43:41.090: INFO: stderr: "I0828 14:43:39.929762    3973 log.go:172] (0x400003a0b0) (0x40006d8000) Create stream\nI0828 14:43:39.932186    3973 log.go:172] (0x400003a0b0) (0x40006d8000) Stream added, broadcasting: 1\nI0828 14:43:39.945589    3973 log.go:172] (0x400003a0b0) Reply frame received for 1\nI0828 14:43:39.946535    3973 log.go:172] (0x400003a0b0) (0x400072a000) Create stream\nI0828 14:43:39.946616    3973 log.go:172] (0x400003a0b0) (0x400072a000) Stream added, broadcasting: 3\nI0828 14:43:39.948325    3973 log.go:172] (0x400003a0b0) Reply frame received for 3\nI0828 14:43:39.948572    3973 log.go:172] (0x400003a0b0) (0x40007f1680) Create stream\nI0828 14:43:39.948622    3973 log.go:172] (0x400003a0b0) (0x40007f1680) Stream added, broadcasting: 5\nI0828 14:43:39.949990    3973 log.go:172] (0x400003a0b0) Reply frame received for 5\nI0828 14:43:40.005361    3973 log.go:172] (0x400003a0b0) Data frame received for 5\nI0828 14:43:40.005549    3973 log.go:172] (0x40007f1680) (5) Data frame handling\nI0828 14:43:40.005955    3973 log.go:172] (0x40007f1680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 14:43:41.072075    3973 log.go:172] (0x400003a0b0) Data frame received for 3\nI0828 14:43:41.072178    3973 log.go:172] (0x400072a000) (3) Data frame handling\nI0828 14:43:41.072262    3973 log.go:172] (0x400072a000) (3) Data frame sent\nI0828 14:43:41.072323    3973 log.go:172] (0x400003a0b0) Data frame received for 3\nI0828 14:43:41.072375    3973 log.go:172] (0x400072a000) (3) Data frame handling\nI0828 14:43:41.072630    3973 log.go:172] (0x400003a0b0) Data frame received for 5\nI0828 14:43:41.072679    3973 log.go:172] (0x40007f1680) (5) Data frame handling\nI0828 14:43:41.075168    3973 log.go:172] (0x400003a0b0) Data frame received for 1\nI0828 14:43:41.075238    3973 log.go:172] (0x40006d8000) (1) Data frame handling\nI0828 14:43:41.075294    3973 log.go:172] (0x40006d8000) (1) Data frame sent\nI0828 14:43:41.076237    3973 log.go:172] (0x400003a0b0) (0x40006d8000) Stream removed, broadcasting: 1\nI0828 14:43:41.080129    3973 log.go:172] (0x400003a0b0) (0x40006d8000) Stream removed, broadcasting: 1\nI0828 14:43:41.080324    3973 log.go:172] (0x400003a0b0) (0x400072a000) Stream removed, broadcasting: 3\nI0828 14:43:41.080552    3973 log.go:172] (0x400003a0b0) Go away received\nI0828 14:43:41.080886    3973 log.go:172] (0x400003a0b0) (0x40007f1680) Stream removed, broadcasting: 5\n"
Aug 28 14:43:41.091: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 28 14:43:41.091: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 28 14:43:41.095: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 28 14:43:51.432: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 28 14:43:51.433: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 14:43:51.677: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999994812s
Aug 28 14:43:53.424: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.96023336s
Aug 28 14:43:54.845: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.212639293s
Aug 28 14:43:56.543: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.791850904s
Aug 28 14:43:57.841: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.094278124s
Aug 28 14:43:58.993: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.795798086s
Aug 28 14:44:00.103: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.643584806s
Aug 28 14:44:01.145: INFO: Verifying statefulset ss doesn't scale past 1 for another 533.600761ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1286
Aug 28 14:44:02.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:44:04.880: INFO: stderr: "I0828 14:44:04.766106    3996 log.go:172] (0x4000a7c420) (0x4000815180) Create stream\nI0828 14:44:04.769949    3996 log.go:172] (0x4000a7c420) (0x4000815180) Stream added, broadcasting: 1\nI0828 14:44:04.781793    3996 log.go:172] (0x4000a7c420) Reply frame received for 1\nI0828 14:44:04.782638    3996 log.go:172] (0x4000a7c420) (0x40009c0000) Create stream\nI0828 14:44:04.782738    3996 log.go:172] (0x4000a7c420) (0x40009c0000) Stream added, broadcasting: 3\nI0828 14:44:04.784501    3996 log.go:172] (0x4000a7c420) Reply frame received for 3\nI0828 14:44:04.785638    3996 log.go:172] (0x4000a7c420) (0x40009c00a0) Create stream\nI0828 14:44:04.785729    3996 log.go:172] (0x4000a7c420) (0x40009c00a0) Stream added, broadcasting: 5\nI0828 14:44:04.788438    3996 log.go:172] (0x4000a7c420) Reply frame received for 5\nI0828 14:44:04.863474    3996 log.go:172] (0x4000a7c420) Data frame received for 5\nI0828 14:44:04.863921    3996 log.go:172] (0x4000a7c420) Data frame received for 1\nI0828 14:44:04.864060    3996 log.go:172] (0x40009c00a0) (5) Data frame handling\nI0828 14:44:04.864176    3996 log.go:172] (0x4000815180) (1) Data frame handling\nI0828 14:44:04.864396    3996 log.go:172] (0x4000a7c420) Data frame received for 3\nI0828 14:44:04.864537    3996 log.go:172] (0x40009c0000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 14:44:04.865718    3996 log.go:172] (0x40009c00a0) (5) Data frame sent\nI0828 14:44:04.865791    3996 log.go:172] (0x40009c0000) (3) Data frame sent\nI0828 14:44:04.866115    3996 log.go:172] (0x4000815180) (1) Data frame sent\nI0828 14:44:04.866612    3996 log.go:172] (0x4000a7c420) Data frame received for 5\nI0828 14:44:04.866673    3996 log.go:172] (0x40009c00a0) (5) Data frame handling\nI0828 14:44:04.866818    3996 log.go:172] (0x4000a7c420) Data frame received for 3\nI0828 14:44:04.866877    3996 log.go:172] (0x40009c0000) (3) Data frame handling\nI0828 14:44:04.867394    3996 log.go:172] (0x4000a7c420) (0x4000815180) Stream removed, broadcasting: 1\nI0828 14:44:04.870618    3996 log.go:172] (0x4000a7c420) (0x4000815180) Stream removed, broadcasting: 1\nI0828 14:44:04.870910    3996 log.go:172] (0x4000a7c420) (0x40009c0000) Stream removed, broadcasting: 3\nI0828 14:44:04.871064    3996 log.go:172] (0x4000a7c420) Go away received\nI0828 14:44:04.871341    3996 log.go:172] (0x4000a7c420) (0x40009c00a0) Stream removed, broadcasting: 5\n"
Aug 28 14:44:04.880: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 28 14:44:04.880: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 28 14:44:04.989: INFO: Found 1 stateful pods, waiting for 3
Aug 28 14:44:15.072: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 14:44:15.072: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 28 14:44:15.072: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 28 14:44:15.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 28 14:44:16.645: INFO: stderr: "I0828 14:44:16.539830    4019 log.go:172] (0x4000a44bb0) (0x40008152c0) Create stream\nI0828 14:44:16.542705    4019 log.go:172] (0x4000a44bb0) (0x40008152c0) Stream added, broadcasting: 1\nI0828 14:44:16.553666    4019 log.go:172] (0x4000a44bb0) Reply frame received for 1\nI0828 14:44:16.554263    4019 log.go:172] (0x4000a44bb0) (0x4000bf4000) Create stream\nI0828 14:44:16.554329    4019 log.go:172] (0x4000a44bb0) (0x4000bf4000) Stream added, broadcasting: 3\nI0828 14:44:16.556072    4019 log.go:172] (0x4000a44bb0) Reply frame received for 3\nI0828 14:44:16.556457    4019 log.go:172] (0x4000a44bb0) (0x4000810000) Create stream\nI0828 14:44:16.556540    4019 log.go:172] (0x4000a44bb0) (0x4000810000) Stream added, broadcasting: 5\nI0828 14:44:16.558070    4019 log.go:172] (0x4000a44bb0) Reply frame received for 5\nI0828 14:44:16.622466    4019 log.go:172] (0x4000a44bb0) Data frame received for 3\nI0828 14:44:16.622912    4019 log.go:172] (0x4000a44bb0) Data frame received for 5\nI0828 14:44:16.623039    4019 log.go:172] (0x4000810000) (5) Data frame handling\nI0828 14:44:16.623249    4019 log.go:172] (0x4000a44bb0) Data frame received for 1\nI0828 14:44:16.623345    4019 log.go:172] (0x40008152c0) (1) Data frame handling\nI0828 14:44:16.623454    4019 log.go:172] (0x4000bf4000) (3) Data frame handling\nI0828 14:44:16.624095    4019 log.go:172] (0x4000bf4000) (3) Data frame sent\nI0828 14:44:16.624323    4019 log.go:172] (0x40008152c0) (1) Data frame sent\nI0828 14:44:16.624562    4019 log.go:172] (0x4000a44bb0) Data frame received for 3\nI0828 14:44:16.624671    4019 log.go:172] (0x4000bf4000) (3) Data frame handling\nI0828 14:44:16.624891    4019 log.go:172] (0x4000810000) (5) Data frame sent\nI0828 14:44:16.624973    4019 log.go:172] (0x4000a44bb0) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 14:44:16.625042    4019 log.go:172] (0x4000810000) (5) Data frame handling\nI0828 14:44:16.629098    4019 log.go:172] (0x4000a44bb0) (0x40008152c0) Stream removed, broadcasting: 1\nI0828 14:44:16.629807    4019 log.go:172] (0x4000a44bb0) Go away received\nI0828 14:44:16.632529    4019 log.go:172] (0x4000a44bb0) (0x40008152c0) Stream removed, broadcasting: 1\nI0828 14:44:16.633007    4019 log.go:172] (0x4000a44bb0) (0x4000bf4000) Stream removed, broadcasting: 3\nI0828 14:44:16.633237    4019 log.go:172] (0x4000a44bb0) (0x4000810000) Stream removed, broadcasting: 5\n"
Aug 28 14:44:16.646: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 28 14:44:16.646: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 28 14:44:16.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 28 14:44:18.221: INFO: stderr: "I0828 14:44:18.008430    4042 log.go:172] (0x40006d8000) (0x4000801220) Create stream\nI0828 14:44:18.011024    4042 log.go:172] (0x40006d8000) (0x4000801220) Stream added, broadcasting: 1\nI0828 14:44:18.026634    4042 log.go:172] (0x40006d8000) Reply frame received for 1\nI0828 14:44:18.027329    4042 log.go:172] (0x40006d8000) (0x4000801400) Create stream\nI0828 14:44:18.027391    4042 log.go:172] (0x40006d8000) (0x4000801400) Stream added, broadcasting: 3\nI0828 14:44:18.028813    4042 log.go:172] (0x40006d8000) Reply frame received for 3\nI0828 14:44:18.029163    4042 log.go:172] (0x40006d8000) (0x4000928000) Create stream\nI0828 14:44:18.029230    4042 log.go:172] (0x40006d8000) (0x4000928000) Stream added, broadcasting: 5\nI0828 14:44:18.030439    4042 log.go:172] (0x40006d8000) Reply frame received for 5\nI0828 14:44:18.101368    4042 log.go:172] (0x40006d8000) Data frame received for 5\nI0828 14:44:18.101546    4042 log.go:172] (0x4000928000) (5) Data frame handling\nI0828 14:44:18.101960    4042 log.go:172] (0x4000928000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 14:44:18.200146    4042 log.go:172] (0x40006d8000) Data frame received for 5\nI0828 14:44:18.200425    4042 log.go:172] (0x40006d8000) Data frame received for 3\nI0828 14:44:18.200543    4042 log.go:172] (0x4000801400) (3) Data frame handling\nI0828 14:44:18.200647    4042 log.go:172] (0x4000801400) (3) Data frame sent\nI0828 14:44:18.200884    4042 log.go:172] (0x40006d8000) Data frame received for 3\nI0828 14:44:18.200997    4042 log.go:172] (0x4000801400) (3) Data frame handling\nI0828 14:44:18.201294    4042 log.go:172] (0x4000928000) (5) Data frame handling\nI0828 14:44:18.201546    4042 log.go:172] (0x40006d8000) Data frame received for 1\nI0828 14:44:18.201618    4042 log.go:172] (0x4000801220) (1) Data frame handling\nI0828 14:44:18.201680    4042 log.go:172] (0x4000801220) (1) Data frame sent\nI0828 14:44:18.203818    4042 log.go:172] (0x40006d8000) (0x4000801220) Stream removed, broadcasting: 1\nI0828 14:44:18.209574    4042 log.go:172] (0x40006d8000) (0x4000801220) Stream removed, broadcasting: 1\nI0828 14:44:18.210096    4042 log.go:172] (0x40006d8000) (0x4000801400) Stream removed, broadcasting: 3\nI0828 14:44:18.210523    4042 log.go:172] (0x40006d8000) (0x4000928000) Stream removed, broadcasting: 5\n"
Aug 28 14:44:18.222: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 28 14:44:18.222: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 28 14:44:18.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 28 14:44:19.821: INFO: stderr: "I0828 14:44:19.581581    4066 log.go:172] (0x400003a370) (0x40007fd4a0) Create stream\nI0828 14:44:19.585398    4066 log.go:172] (0x400003a370) (0x40007fd4a0) Stream added, broadcasting: 1\nI0828 14:44:19.598160    4066 log.go:172] (0x400003a370) Reply frame received for 1\nI0828 14:44:19.598794    4066 log.go:172] (0x400003a370) (0x4000a0e000) Create stream\nI0828 14:44:19.598856    4066 log.go:172] (0x400003a370) (0x4000a0e000) Stream added, broadcasting: 3\nI0828 14:44:19.600372    4066 log.go:172] (0x400003a370) Reply frame received for 3\nI0828 14:44:19.600613    4066 log.go:172] (0x400003a370) (0x4000710000) Create stream\nI0828 14:44:19.600673    4066 log.go:172] (0x400003a370) (0x4000710000) Stream added, broadcasting: 5\nI0828 14:44:19.602106    4066 log.go:172] (0x400003a370) Reply frame received for 5\nI0828 14:44:19.673196    4066 log.go:172] (0x400003a370) Data frame received for 5\nI0828 14:44:19.673400    4066 log.go:172] (0x4000710000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0828 14:44:19.674686    4066 log.go:172] (0x4000710000) (5) Data frame sent\nI0828 14:44:19.797300    4066 log.go:172] (0x400003a370) Data frame received for 3\nI0828 14:44:19.797470    4066 log.go:172] (0x4000a0e000) (3) Data frame handling\nI0828 14:44:19.797561    4066 log.go:172] (0x4000a0e000) (3) Data frame sent\nI0828 14:44:19.797635    4066 log.go:172] (0x400003a370) Data frame received for 3\nI0828 14:44:19.797712    4066 log.go:172] (0x4000a0e000) (3) Data frame handling\nI0828 14:44:19.798087    4066 log.go:172] (0x400003a370) Data frame received for 5\nI0828 14:44:19.798180    4066 log.go:172] (0x4000710000) (5) Data frame handling\nI0828 14:44:19.799038    4066 log.go:172] (0x400003a370) Data frame received for 1\nI0828 14:44:19.799144    4066 log.go:172] (0x40007fd4a0) (1) Data frame handling\nI0828 14:44:19.799277    4066 log.go:172] (0x40007fd4a0) (1) Data frame sent\nI0828 14:44:19.799619    4066 log.go:172] (0x400003a370) (0x40007fd4a0) Stream removed, broadcasting: 1\nI0828 14:44:19.801692    4066 log.go:172] (0x400003a370) (0x40007fd4a0) Stream removed, broadcasting: 1\nI0828 14:44:19.801945    4066 log.go:172] (0x400003a370) (0x4000a0e000) Stream removed, broadcasting: 3\nI0828 14:44:19.806981    4066 log.go:172] (0x400003a370) (0x4000710000) Stream removed, broadcasting: 5\nI0828 14:44:19.807098    4066 log.go:172] (0x400003a370) Go away received\n"
Aug 28 14:44:19.822: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 28 14:44:19.822: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 28 14:44:19.822: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 14:44:19.828: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 28 14:44:29.844: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 28 14:44:29.844: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 28 14:44:29.844: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 28 14:44:29.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999993005s
Aug 28 14:44:30.880: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985804714s
Aug 28 14:44:31.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975226569s
Aug 28 14:44:32.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.963880729s
Aug 28 14:44:33.911: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.956020383s
Aug 28 14:44:34.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.943893816s
Aug 28 14:44:35.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.935387977s
Aug 28 14:44:36.943: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.923965107s
Aug 28 14:44:38.183: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.912368636s
Aug 28 14:44:39.449: INFO: Verifying statefulset ss doesn't scale past 3 for another 671.740381ms
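The countdown lines above come from a verification loop: the framework polls the StatefulSet's replica count once a second for a 10s window and fails immediately if it ever scales past the expected maximum. A minimal sketch of that pattern (the helper name and `get_replicas` accessor are hypothetical, not the framework's actual API):

```python
import time

def verify_no_scale_past(get_replicas, limit, window=10.0, interval=1.0):
    """Poll get_replicas() for `window` seconds, failing fast if the
    replica count ever exceeds `limit`. Hypothetical sketch of the
    countdown loop seen in the log; not the real e2e framework code."""
    deadline = time.monotonic() + window
    while (remaining := deadline - time.monotonic()) > 0:
        count = get_replicas()
        if count > limit:
            raise AssertionError(f"scaled past {limit}: saw {count}")
        # mirrors the log line: "doesn't scale past 3 for another Xs"
        print(f"Verifying statefulset doesn't scale past {limit} "
              f"for another {remaining:.1f}s")
        time.sleep(interval)
    return True
```

The loop treats the window expiring without incident as success, which is why the log shows ten countdown lines and then moves on to the scale-down STEP.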
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1286
Aug 28 14:44:40.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:44:42.195: INFO: stderr: "I0828 14:44:42.052145    4088 log.go:172] (0x4000aaa0b0) (0x4000986140) Create stream\nI0828 14:44:42.057015    4088 log.go:172] (0x4000aaa0b0) (0x4000986140) Stream added, broadcasting: 1\nI0828 14:44:42.069269    4088 log.go:172] (0x4000aaa0b0) Reply frame received for 1\nI0828 14:44:42.070640    4088 log.go:172] (0x4000aaa0b0) (0x4000630aa0) Create stream\nI0828 14:44:42.070780    4088 log.go:172] (0x4000aaa0b0) (0x4000630aa0) Stream added, broadcasting: 3\nI0828 14:44:42.072451    4088 log.go:172] (0x4000aaa0b0) Reply frame received for 3\nI0828 14:44:42.072834    4088 log.go:172] (0x4000aaa0b0) (0x4000630b40) Create stream\nI0828 14:44:42.072922    4088 log.go:172] (0x4000aaa0b0) (0x4000630b40) Stream added, broadcasting: 5\nI0828 14:44:42.074216    4088 log.go:172] (0x4000aaa0b0) Reply frame received for 5\nI0828 14:44:42.126005    4088 log.go:172] (0x4000aaa0b0) Data frame received for 5\nI0828 14:44:42.126298    4088 log.go:172] (0x4000630b40) (5) Data frame handling\nI0828 14:44:42.126630    4088 log.go:172] (0x4000630b40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 14:44:42.175185    4088 log.go:172] (0x4000aaa0b0) Data frame received for 5\nI0828 14:44:42.175477    4088 log.go:172] (0x4000630b40) (5) Data frame handling\nI0828 14:44:42.175821    4088 log.go:172] (0x4000aaa0b0) Data frame received for 3\nI0828 14:44:42.175991    4088 log.go:172] (0x4000630aa0) (3) Data frame handling\nI0828 14:44:42.176177    4088 log.go:172] (0x4000630aa0) (3) Data frame sent\nI0828 14:44:42.176367    4088 log.go:172] (0x4000aaa0b0) Data frame received for 3\nI0828 14:44:42.176524    4088 log.go:172] (0x4000630aa0) (3) Data frame handling\nI0828 14:44:42.176903    4088 log.go:172] (0x4000aaa0b0) Data frame received for 1\nI0828 14:44:42.177036    4088 log.go:172] (0x4000986140) (1) Data frame handling\nI0828 14:44:42.177144    4088 log.go:172] (0x4000986140) (1) Data frame sent\nI0828 14:44:42.178076  
  4088 log.go:172] (0x4000aaa0b0) (0x4000986140) Stream removed, broadcasting: 1\nI0828 14:44:42.180857    4088 log.go:172] (0x4000aaa0b0) Go away received\nI0828 14:44:42.183044    4088 log.go:172] (0x4000aaa0b0) (0x4000986140) Stream removed, broadcasting: 1\nI0828 14:44:42.183905    4088 log.go:172] (0x4000aaa0b0) (0x4000630aa0) Stream removed, broadcasting: 3\nI0828 14:44:42.184581    4088 log.go:172] (0x4000aaa0b0) (0x4000630b40) Stream removed, broadcasting: 5\n"
Aug 28 14:44:42.196: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 28 14:44:42.196: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 28 14:44:42.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:44:43.750: INFO: stderr: "I0828 14:44:43.543059    4111 log.go:172] (0x4000aba630) (0x4000690280) Create stream\nI0828 14:44:43.547150    4111 log.go:172] (0x4000aba630) (0x4000690280) Stream added, broadcasting: 1\nI0828 14:44:43.556568    4111 log.go:172] (0x4000aba630) Reply frame received for 1\nI0828 14:44:43.557234    4111 log.go:172] (0x4000aba630) (0x400071a000) Create stream\nI0828 14:44:43.557294    4111 log.go:172] (0x4000aba630) (0x400071a000) Stream added, broadcasting: 3\nI0828 14:44:43.558682    4111 log.go:172] (0x4000aba630) Reply frame received for 3\nI0828 14:44:43.559174    4111 log.go:172] (0x4000aba630) (0x4000738000) Create stream\nI0828 14:44:43.559291    4111 log.go:172] (0x4000aba630) (0x4000738000) Stream added, broadcasting: 5\nI0828 14:44:43.560906    4111 log.go:172] (0x4000aba630) Reply frame received for 5\nI0828 14:44:43.615558    4111 log.go:172] (0x4000aba630) Data frame received for 5\nI0828 14:44:43.615717    4111 log.go:172] (0x4000738000) (5) Data frame handling\nI0828 14:44:43.616072    4111 log.go:172] (0x4000738000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0828 14:44:43.730647    4111 log.go:172] (0x4000aba630) Data frame received for 3\nI0828 14:44:43.730930    4111 log.go:172] (0x400071a000) (3) Data frame handling\nI0828 14:44:43.731099    4111 log.go:172] (0x400071a000) (3) Data frame sent\nI0828 14:44:43.731250    4111 log.go:172] (0x4000aba630) Data frame received for 3\nI0828 14:44:43.731362    4111 log.go:172] (0x400071a000) (3) Data frame handling\nI0828 14:44:43.732318    4111 log.go:172] (0x4000aba630) Data frame received for 1\nI0828 14:44:43.732448    4111 log.go:172] (0x4000690280) (1) Data frame handling\nI0828 14:44:43.732578    4111 log.go:172] (0x4000690280) (1) Data frame sent\nI0828 14:44:43.732912    4111 log.go:172] (0x4000aba630) Data frame received for 5\nI0828 14:44:43.733039    4111 log.go:172] (0x4000738000) (5) Data frame handling\nI0828 14:44:43.736313  
  4111 log.go:172] (0x4000aba630) (0x4000690280) Stream removed, broadcasting: 1\nI0828 14:44:43.737496    4111 log.go:172] (0x4000aba630) Go away received\nI0828 14:44:43.741138    4111 log.go:172] (0x4000aba630) (0x4000690280) Stream removed, broadcasting: 1\nI0828 14:44:43.741378    4111 log.go:172] (0x4000aba630) (0x400071a000) Stream removed, broadcasting: 3\nI0828 14:44:43.741600    4111 log.go:172] (0x4000aba630) (0x4000738000) Stream removed, broadcasting: 5\n"
Aug 28 14:44:43.752: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 28 14:44:43.752: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 28 14:44:43.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:44:45.301: INFO: rc: 1
Aug 28 14:44:45.301: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Aug 28 14:44:55.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:44:56.666: INFO: rc: 1
Aug 28 14:44:56.666: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Aug 28 14:45:06.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:45:07.941: INFO: rc: 1
Aug 28 14:45:07.942: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:45:17.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:45:21.131: INFO: rc: 1
Aug 28 14:45:21.132: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:45:31.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:45:33.963: INFO: rc: 1
Aug 28 14:45:33.964: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:45:43.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:45:45.189: INFO: rc: 1
Aug 28 14:45:45.189: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:45:55.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:45:56.441: INFO: rc: 1
Aug 28 14:45:56.441: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:46:06.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:46:08.411: INFO: rc: 1
Aug 28 14:46:08.411: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:46:18.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:46:19.622: INFO: rc: 1
Aug 28 14:46:19.622: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:46:29.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:46:30.919: INFO: rc: 1
Aug 28 14:46:30.919: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:46:40.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:46:42.181: INFO: rc: 1
Aug 28 14:46:42.181: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:46:52.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:46:53.412: INFO: rc: 1
Aug 28 14:46:53.413: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:47:03.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:47:05.077: INFO: rc: 1
Aug 28 14:47:05.077: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:47:15.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:47:16.334: INFO: rc: 1
Aug 28 14:47:16.335: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:47:26.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:47:27.601: INFO: rc: 1
Aug 28 14:47:27.602: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:47:37.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:47:39.652: INFO: rc: 1
Aug 28 14:47:39.652: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:47:49.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:47:50.914: INFO: rc: 1
Aug 28 14:47:50.914: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:48:00.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:48:02.393: INFO: rc: 1
Aug 28 14:48:02.393: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:48:12.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:48:14.385: INFO: rc: 1
Aug 28 14:48:14.385: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:48:24.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:48:25.665: INFO: rc: 1
Aug 28 14:48:25.665: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:48:35.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:48:36.856: INFO: rc: 1
Aug 28 14:48:36.857: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:48:46.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:48:48.064: INFO: rc: 1
Aug 28 14:48:48.064: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:48:58.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:48:59.363: INFO: rc: 1
Aug 28 14:48:59.363: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:49:09.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:49:10.555: INFO: rc: 1
Aug 28 14:49:10.555: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:49:20.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:49:21.807: INFO: rc: 1
Aug 28 14:49:21.808: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:49:31.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:49:33.040: INFO: rc: 1
Aug 28 14:49:33.040: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 28 14:49:43.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1286 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 28 14:49:44.583: INFO: rc: 1
Aug 28 14:49:44.583: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
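The long run of "Waiting 10s to retry failed RunHostCmd" entries above is a fixed-delay retry loop: the exec first fails because the container is gone ("container not found"), then because the pod itself has been deleted by the scale-down ("pods \"ss-2\" not found"), and the framework keeps retrying at 10s intervals until its deadline. A sketch of that behavior, with `runner` as a hypothetical injection point standing in for the `kubectl exec` invocation:

```python
import time

def run_host_cmd_with_retries(cmd, runner, timeout=300.0, wait=10.0):
    """Retry `runner(cmd)` until it returns rc == 0 or `timeout` elapses,
    sleeping `wait` seconds between attempts. Hypothetical sketch of the
    fixed-delay retry the log shows; not the framework's actual signature."""
    start = time.monotonic()
    while True:
        rc, stdout, stderr = runner(cmd)
        if rc == 0:
            return stdout
        if time.monotonic() - start > timeout:
            raise TimeoutError(f"command kept failing: rc={rc}, stderr={stderr!r}")
        print(f"Waiting {wait}s to retry failed RunHostCmd: rc={rc}")
        time.sleep(wait)
```

Note the retries here are expected: the test deliberately races the exec against pod deletion, and the `|| true` on the shell command plus the retry wrapper keep a vanished pod from failing the suite.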
Aug 28 14:49:44.584: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 28 14:49:44.603: INFO: Deleting all statefulset in ns statefulset-1286
Aug 28 14:49:44.606: INFO: Scaling statefulset ss to 0
Aug 28 14:49:44.620: INFO: Waiting for statefulset status.replicas updated to 0
Aug 28 14:49:44.622: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:49:45.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1286" for this suite.

• [SLOW TEST:377.093 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":239,"skipped":4229,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:49:45.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 28 14:49:47.529: INFO: Waiting up to 5m0s for pod "pod-6b64b644-7e87-4a8a-bd9f-0bc428f17e77" in namespace "emptydir-8550" to be "Succeeded or Failed"
Aug 28 14:49:47.589: INFO: Pod "pod-6b64b644-7e87-4a8a-bd9f-0bc428f17e77": Phase="Pending", Reason="", readiness=false. Elapsed: 59.250232ms
Aug 28 14:49:49.594: INFO: Pod "pod-6b64b644-7e87-4a8a-bd9f-0bc428f17e77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064272469s
Aug 28 14:49:51.797: INFO: Pod "pod-6b64b644-7e87-4a8a-bd9f-0bc428f17e77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268000536s
Aug 28 14:49:54.037: INFO: Pod "pod-6b64b644-7e87-4a8a-bd9f-0bc428f17e77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.507195528s
STEP: Saw pod success
Aug 28 14:49:54.037: INFO: Pod "pod-6b64b644-7e87-4a8a-bd9f-0bc428f17e77" satisfied condition "Succeeded or Failed"
Aug 28 14:49:54.040: INFO: Trying to get logs from node kali-worker2 pod pod-6b64b644-7e87-4a8a-bd9f-0bc428f17e77 container test-container: 
STEP: delete the pod
Aug 28 14:49:54.274: INFO: Waiting for pod pod-6b64b644-7e87-4a8a-bd9f-0bc428f17e77 to disappear
Aug 28 14:49:54.282: INFO: Pod pod-6b64b644-7e87-4a8a-bd9f-0bc428f17e77 no longer exists
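The Pending/Succeeded progression above is the framework polling the pod's phase until it reaches a terminal state or the 5m deadline passes. A minimal sketch of that wait loop, assuming a hypothetical `get_phase` accessor in place of a GET on the pod object:

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_pod_terminal(get_phase, timeout=300.0, poll=2.0):
    """Poll a pod's phase until it is Succeeded or Failed, as in the log's
    'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' step.
    Hypothetical sketch; `get_phase` stands in for an API server read."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        # mirrors the log line: Phase="Pending" ... Elapsed: Xs
        print(f'Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in TERMINAL_PHASES:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(poll)
```

After the phase check succeeds, the test fetches container logs to verify the emptydir contents, then deletes the pod and waits for it to disappear, which is the sequence the surrounding STEP lines record.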
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:49:54.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8550" for this suite.

• [SLOW TEST:8.799 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4239,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:49:54.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:49:55.257: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f72034e2-1e7e-403d-b08c-d8eebc1d8bfe" in namespace "downward-api-1377" to be "Succeeded or Failed"
Aug 28 14:49:55.496: INFO: Pod "downwardapi-volume-f72034e2-1e7e-403d-b08c-d8eebc1d8bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 238.583889ms
Aug 28 14:49:57.504: INFO: Pod "downwardapi-volume-f72034e2-1e7e-403d-b08c-d8eebc1d8bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247142169s
Aug 28 14:49:59.719: INFO: Pod "downwardapi-volume-f72034e2-1e7e-403d-b08c-d8eebc1d8bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.46211306s
Aug 28 14:50:01.895: INFO: Pod "downwardapi-volume-f72034e2-1e7e-403d-b08c-d8eebc1d8bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.637487461s
Aug 28 14:50:03.973: INFO: Pod "downwardapi-volume-f72034e2-1e7e-403d-b08c-d8eebc1d8bfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.716022112s
STEP: Saw pod success
Aug 28 14:50:03.974: INFO: Pod "downwardapi-volume-f72034e2-1e7e-403d-b08c-d8eebc1d8bfe" satisfied condition "Succeeded or Failed"
Aug 28 14:50:03.981: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f72034e2-1e7e-403d-b08c-d8eebc1d8bfe container client-container: 
STEP: delete the pod
Aug 28 14:50:04.063: INFO: Waiting for pod downwardapi-volume-f72034e2-1e7e-403d-b08c-d8eebc1d8bfe to disappear
Aug 28 14:50:04.170: INFO: Pod downwardapi-volume-f72034e2-1e7e-403d-b08c-d8eebc1d8bfe no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:50:04.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1377" for this suite.

• [SLOW TEST:9.848 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4248,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:50:04.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-0615cc92-1ce1-48b3-b4de-d5ba20bfa985
STEP: Creating a pod to test consume secrets
Aug 28 14:50:04.438: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c40ff88b-7c2b-47a6-914f-cdeb12f7257c" in namespace "projected-1998" to be "Succeeded or Failed"
Aug 28 14:50:04.554: INFO: Pod "pod-projected-secrets-c40ff88b-7c2b-47a6-914f-cdeb12f7257c": Phase="Pending", Reason="", readiness=false. Elapsed: 115.065596ms
Aug 28 14:50:06.559: INFO: Pod "pod-projected-secrets-c40ff88b-7c2b-47a6-914f-cdeb12f7257c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120118393s
Aug 28 14:50:08.565: INFO: Pod "pod-projected-secrets-c40ff88b-7c2b-47a6-914f-cdeb12f7257c": Phase="Running", Reason="", readiness=true. Elapsed: 4.126336811s
Aug 28 14:50:10.571: INFO: Pod "pod-projected-secrets-c40ff88b-7c2b-47a6-914f-cdeb12f7257c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13206436s
STEP: Saw pod success
Aug 28 14:50:10.571: INFO: Pod "pod-projected-secrets-c40ff88b-7c2b-47a6-914f-cdeb12f7257c" satisfied condition "Succeeded or Failed"
Aug 28 14:50:10.574: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-c40ff88b-7c2b-47a6-914f-cdeb12f7257c container projected-secret-volume-test: 
STEP: delete the pod
Aug 28 14:50:10.610: INFO: Waiting for pod pod-projected-secrets-c40ff88b-7c2b-47a6-914f-cdeb12f7257c to disappear
Aug 28 14:50:10.630: INFO: Pod pod-projected-secrets-c40ff88b-7c2b-47a6-914f-cdeb12f7257c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:50:10.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1998" for this suite.

• [SLOW TEST:6.440 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4260,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:50:10.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-17f27c3d-a869-4eca-a6c4-9981f423b117
Aug 28 14:50:10.768: INFO: Pod name my-hostname-basic-17f27c3d-a869-4eca-a6c4-9981f423b117: Found 0 pods out of 1
Aug 28 14:50:15.775: INFO: Pod name my-hostname-basic-17f27c3d-a869-4eca-a6c4-9981f423b117: Found 1 pods out of 1
Aug 28 14:50:15.775: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-17f27c3d-a869-4eca-a6c4-9981f423b117" are running
Aug 28 14:50:15.779: INFO: Pod "my-hostname-basic-17f27c3d-a869-4eca-a6c4-9981f423b117-g6599" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 14:50:10 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 14:50:14 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 14:50:14 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-28 14:50:10 +0000 UTC Reason: Message:}])
Aug 28 14:50:15.779: INFO: Trying to dial the pod
Aug 28 14:50:20.792: INFO: Controller my-hostname-basic-17f27c3d-a869-4eca-a6c4-9981f423b117: Got expected result from replica 1 [my-hostname-basic-17f27c3d-a869-4eca-a6c4-9981f423b117-g6599]: "my-hostname-basic-17f27c3d-a869-4eca-a6c4-9981f423b117-g6599", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:50:20.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6310" for this suite.

• [SLOW TEST:10.160 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":243,"skipped":4281,"failed":0}
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:50:20.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Aug 28 14:50:20.880: INFO: Waiting up to 5m0s for pod "client-containers-70b97f89-ec40-4edb-9a76-e0b55bf03abd" in namespace "containers-1166" to be "Succeeded or Failed"
Aug 28 14:50:20.885: INFO: Pod "client-containers-70b97f89-ec40-4edb-9a76-e0b55bf03abd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.514834ms
Aug 28 14:50:22.892: INFO: Pod "client-containers-70b97f89-ec40-4edb-9a76-e0b55bf03abd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0114682s
Aug 28 14:50:24.897: INFO: Pod "client-containers-70b97f89-ec40-4edb-9a76-e0b55bf03abd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01726353s
Aug 28 14:50:26.904: INFO: Pod "client-containers-70b97f89-ec40-4edb-9a76-e0b55bf03abd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023667318s
STEP: Saw pod success
Aug 28 14:50:26.904: INFO: Pod "client-containers-70b97f89-ec40-4edb-9a76-e0b55bf03abd" satisfied condition "Succeeded or Failed"
Aug 28 14:50:26.908: INFO: Trying to get logs from node kali-worker pod client-containers-70b97f89-ec40-4edb-9a76-e0b55bf03abd container test-container: 
STEP: delete the pod
Aug 28 14:50:26.943: INFO: Waiting for pod client-containers-70b97f89-ec40-4edb-9a76-e0b55bf03abd to disappear
Aug 28 14:50:26.979: INFO: Pod client-containers-70b97f89-ec40-4edb-9a76-e0b55bf03abd no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:50:26.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1166" for this suite.

• [SLOW TEST:6.185 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4287,"failed":0}
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:50:26.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2150
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-2150
I0828 14:50:27.515758      11 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2150, replica count: 2
I0828 14:50:30.566695      11 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:50:33.567149      11 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0828 14:50:36.567545      11 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 28 14:50:36.567: INFO: Creating new exec pod
Aug 28 14:50:45.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-2150 execpodrf2w9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 28 14:50:47.367: INFO: stderr: "I0828 14:50:47.258685    4768 log.go:172] (0x400066e160) (0x4000584140) Create stream\nI0828 14:50:47.264404    4768 log.go:172] (0x400066e160) (0x4000584140) Stream added, broadcasting: 1\nI0828 14:50:47.275033    4768 log.go:172] (0x400066e160) Reply frame received for 1\nI0828 14:50:47.275635    4768 log.go:172] (0x400066e160) (0x4000584280) Create stream\nI0828 14:50:47.275687    4768 log.go:172] (0x400066e160) (0x4000584280) Stream added, broadcasting: 3\nI0828 14:50:47.277328    4768 log.go:172] (0x400066e160) Reply frame received for 3\nI0828 14:50:47.277789    4768 log.go:172] (0x400066e160) (0x40004fca00) Create stream\nI0828 14:50:47.277881    4768 log.go:172] (0x400066e160) (0x40004fca00) Stream added, broadcasting: 5\nI0828 14:50:47.279227    4768 log.go:172] (0x400066e160) Reply frame received for 5\nI0828 14:50:47.344160    4768 log.go:172] (0x400066e160) Data frame received for 3\nI0828 14:50:47.344506    4768 log.go:172] (0x400066e160) Data frame received for 5\nI0828 14:50:47.344649    4768 log.go:172] (0x40004fca00) (5) Data frame handling\nI0828 14:50:47.344801    4768 log.go:172] (0x4000584280) (3) Data frame handling\nI0828 14:50:47.345240    4768 log.go:172] (0x400066e160) Data frame received for 1\nI0828 14:50:47.345311    4768 log.go:172] (0x4000584140) (1) Data frame handling\nI0828 14:50:47.347316    4768 log.go:172] (0x40004fca00) (5) Data frame sent\nI0828 14:50:47.347797    4768 log.go:172] (0x4000584140) (1) Data frame sent\nI0828 14:50:47.348193    4768 log.go:172] (0x400066e160) Data frame received for 5\nI0828 14:50:47.348260    4768 log.go:172] (0x40004fca00) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0828 14:50:47.350917    4768 log.go:172] (0x400066e160) (0x4000584140) Stream removed, broadcasting: 1\nI0828 14:50:47.351733    4768 log.go:172] (0x400066e160) Go away received\nI0828 14:50:47.354613    4768 log.go:172] (0x400066e160) (0x4000584140) Stream removed, broadcasting: 1\nI0828 14:50:47.355247    4768 log.go:172] (0x400066e160) (0x4000584280) Stream removed, broadcasting: 3\nI0828 14:50:47.355797    4768 log.go:172] (0x400066e160) (0x40004fca00) Stream removed, broadcasting: 5\n"
Aug 28 14:50:47.367: INFO: stdout: ""
Aug 28 14:50:47.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-2150 execpodrf2w9 -- /bin/sh -x -c nc -zv -t -w 2 10.104.35.49 80'
Aug 28 14:50:48.767: INFO: stderr: "I0828 14:50:48.658963    4791 log.go:172] (0x40007ba370) (0x40005fed20) Create stream\nI0828 14:50:48.662253    4791 log.go:172] (0x40007ba370) (0x40005fed20) Stream added, broadcasting: 1\nI0828 14:50:48.674270    4791 log.go:172] (0x40007ba370) Reply frame received for 1\nI0828 14:50:48.675587    4791 log.go:172] (0x40007ba370) (0x40007f34a0) Create stream\nI0828 14:50:48.675716    4791 log.go:172] (0x40007ba370) (0x40007f34a0) Stream added, broadcasting: 3\nI0828 14:50:48.677316    4791 log.go:172] (0x40007ba370) Reply frame received for 3\nI0828 14:50:48.677538    4791 log.go:172] (0x40007ba370) (0x400072c000) Create stream\nI0828 14:50:48.677585    4791 log.go:172] (0x40007ba370) (0x400072c000) Stream added, broadcasting: 5\nI0828 14:50:48.678623    4791 log.go:172] (0x40007ba370) Reply frame received for 5\nI0828 14:50:48.749555    4791 log.go:172] (0x40007ba370) Data frame received for 3\nI0828 14:50:48.749936    4791 log.go:172] (0x40007ba370) Data frame received for 5\nI0828 14:50:48.750115    4791 log.go:172] (0x400072c000) (5) Data frame handling\nI0828 14:50:48.750423    4791 log.go:172] (0x40007f34a0) (3) Data frame handling\nI0828 14:50:48.750743    4791 log.go:172] (0x40007ba370) Data frame received for 1\nI0828 14:50:48.750813    4791 log.go:172] (0x40005fed20) (1) Data frame handling\n+ nc -zv -t -w 2 10.104.35.49 80\nConnection to 10.104.35.49 80 port [tcp/http] succeeded!\nI0828 14:50:48.751402    4791 log.go:172] (0x400072c000) (5) Data frame sent\nI0828 14:50:48.751640    4791 log.go:172] (0x40007ba370) Data frame received for 5\nI0828 14:50:48.751708    4791 log.go:172] (0x400072c000) (5) Data frame handling\nI0828 14:50:48.751847    4791 log.go:172] (0x40005fed20) (1) Data frame sent\nI0828 14:50:48.753663    4791 log.go:172] (0x40007ba370) (0x40005fed20) Stream removed, broadcasting: 1\nI0828 14:50:48.755088    4791 log.go:172] (0x40007ba370) Go away received\nI0828 14:50:48.758313    4791 log.go:172] (0x40007ba370) (0x40005fed20) Stream removed, broadcasting: 1\nI0828 14:50:48.758575    4791 log.go:172] (0x40007ba370) (0x40007f34a0) Stream removed, broadcasting: 3\nI0828 14:50:48.758760    4791 log.go:172] (0x40007ba370) (0x400072c000) Stream removed, broadcasting: 5\n"
Aug 28 14:50:48.767: INFO: stdout: ""
Aug 28 14:50:48.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-2150 execpodrf2w9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30733'
Aug 28 14:50:50.732: INFO: stderr: "I0828 14:50:50.630479    4813 log.go:172] (0x4000a20000) (0x4000827360) Create stream\nI0828 14:50:50.634844    4813 log.go:172] (0x4000a20000) (0x4000827360) Stream added, broadcasting: 1\nI0828 14:50:50.647591    4813 log.go:172] (0x4000a20000) Reply frame received for 1\nI0828 14:50:50.648849    4813 log.go:172] (0x4000a20000) (0x400072a000) Create stream\nI0828 14:50:50.648998    4813 log.go:172] (0x4000a20000) (0x400072a000) Stream added, broadcasting: 3\nI0828 14:50:50.650740    4813 log.go:172] (0x4000a20000) Reply frame received for 3\nI0828 14:50:50.651286    4813 log.go:172] (0x4000a20000) (0x4000827540) Create stream\nI0828 14:50:50.651412    4813 log.go:172] (0x4000a20000) (0x4000827540) Stream added, broadcasting: 5\nI0828 14:50:50.653068    4813 log.go:172] (0x4000a20000) Reply frame received for 5\nI0828 14:50:50.703619    4813 log.go:172] (0x4000a20000) Data frame received for 5\nI0828 14:50:50.703847    4813 log.go:172] (0x4000a20000) Data frame received for 3\nI0828 14:50:50.704002    4813 log.go:172] (0x4000a20000) Data frame received for 1\nI0828 14:50:50.704077    4813 log.go:172] (0x400072a000) (3) Data frame handling\nI0828 14:50:50.704443    4813 log.go:172] (0x4000827360) (1) Data frame handling\nI0828 14:50:50.704649    4813 log.go:172] (0x4000827540) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 30733\nConnection to 172.18.0.15 30733 port [tcp/30733] succeeded!\nI0828 14:50:50.708365    4813 log.go:172] (0x4000827540) (5) Data frame sent\nI0828 14:50:50.708558    4813 log.go:172] (0x4000827360) (1) Data frame sent\nI0828 14:50:50.708808    4813 log.go:172] (0x4000a20000) Data frame received for 5\nI0828 14:50:50.708895    4813 log.go:172] (0x4000a20000) (0x4000827360) Stream removed, broadcasting: 1\nI0828 14:50:50.709161    4813 log.go:172] (0x4000827540) (5) Data frame handling\nI0828 14:50:50.709452    4813 log.go:172] (0x4000a20000) Go away received\nI0828 14:50:50.717863    4813 log.go:172] (0x4000a20000) (0x4000827360) Stream removed, broadcasting: 1\nI0828 14:50:50.718494    4813 log.go:172] (0x4000a20000) (0x400072a000) Stream removed, broadcasting: 3\nI0828 14:50:50.718889    4813 log.go:172] (0x4000a20000) (0x4000827540) Stream removed, broadcasting: 5\n"
Aug 28 14:50:50.733: INFO: stdout: ""
Aug 28 14:50:50.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-2150 execpodrf2w9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30733'
Aug 28 14:50:52.177: INFO: stderr: "I0828 14:50:52.077665    4835 log.go:172] (0x400003a0b0) (0x400080f180) Create stream\nI0828 14:50:52.083135    4835 log.go:172] (0x400003a0b0) (0x400080f180) Stream added, broadcasting: 1\nI0828 14:50:52.098186    4835 log.go:172] (0x400003a0b0) Reply frame received for 1\nI0828 14:50:52.099060    4835 log.go:172] (0x400003a0b0) (0x4000644000) Create stream\nI0828 14:50:52.099153    4835 log.go:172] (0x400003a0b0) (0x4000644000) Stream added, broadcasting: 3\nI0828 14:50:52.101145    4835 log.go:172] (0x400003a0b0) Reply frame received for 3\nI0828 14:50:52.101546    4835 log.go:172] (0x400003a0b0) (0x40006440a0) Create stream\nI0828 14:50:52.101632    4835 log.go:172] (0x400003a0b0) (0x40006440a0) Stream added, broadcasting: 5\nI0828 14:50:52.103138    4835 log.go:172] (0x400003a0b0) Reply frame received for 5\nI0828 14:50:52.160877    4835 log.go:172] (0x400003a0b0) Data frame received for 5\nI0828 14:50:52.161143    4835 log.go:172] (0x40006440a0) (5) Data frame handling\nI0828 14:50:52.161432    4835 log.go:172] (0x400003a0b0) Data frame received for 3\nI0828 14:50:52.161592    4835 log.go:172] (0x4000644000) (3) Data frame handling\nI0828 14:50:52.161845    4835 log.go:172] (0x40006440a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 30733\nConnection to 172.18.0.13 30733 port [tcp/30733] succeeded!\nI0828 14:50:52.162713    4835 log.go:172] (0x400003a0b0) Data frame received for 5\nI0828 14:50:52.162787    4835 log.go:172] (0x40006440a0) (5) Data frame handling\nI0828 14:50:52.163625    4835 log.go:172] (0x400003a0b0) Data frame received for 1\nI0828 14:50:52.163685    4835 log.go:172] (0x400080f180) (1) Data frame handling\nI0828 14:50:52.163747    4835 log.go:172] (0x400080f180) (1) Data frame sent\nI0828 14:50:52.164516    4835 log.go:172] (0x400003a0b0) (0x400080f180) Stream removed, broadcasting: 1\nI0828 14:50:52.167529    4835 log.go:172] (0x400003a0b0) Go away received\nI0828 14:50:52.169129    4835 log.go:172] (0x400003a0b0) (0x400080f180) Stream removed, broadcasting: 1\nI0828 14:50:52.169424    4835 log.go:172] (0x400003a0b0) (0x4000644000) Stream removed, broadcasting: 3\nI0828 14:50:52.169608    4835 log.go:172] (0x400003a0b0) (0x40006440a0) Stream removed, broadcasting: 5\n"
Aug 28 14:50:52.178: INFO: stdout: ""
Aug 28 14:50:52.178: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:50:52.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2150" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:25.260 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":245,"skipped":4287,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:50:52.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-ca9c84e9-f020-4dd2-91bc-2e8039b40acd
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:50:58.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2270" for this suite.

• [SLOW TEST:6.346 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4289,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:50:58.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-c90511f4-4139-41cb-bfb8-b530ab657f69
STEP: Creating a pod to test consume configMaps
Aug 28 14:50:58.939: INFO: Waiting up to 5m0s for pod "pod-configmaps-31accc39-f7f8-4045-8c87-cce56b174ac8" in namespace "configmap-448" to be "Succeeded or Failed"
Aug 28 14:50:59.199: INFO: Pod "pod-configmaps-31accc39-f7f8-4045-8c87-cce56b174ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 260.197193ms
Aug 28 14:51:01.203: INFO: Pod "pod-configmaps-31accc39-f7f8-4045-8c87-cce56b174ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264377574s
Aug 28 14:51:03.409: INFO: Pod "pod-configmaps-31accc39-f7f8-4045-8c87-cce56b174ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470167541s
Aug 28 14:51:05.413: INFO: Pod "pod-configmaps-31accc39-f7f8-4045-8c87-cce56b174ac8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.47418552s
STEP: Saw pod success
Aug 28 14:51:05.413: INFO: Pod "pod-configmaps-31accc39-f7f8-4045-8c87-cce56b174ac8" satisfied condition "Succeeded or Failed"
Aug 28 14:51:05.462: INFO: Trying to get logs from node kali-worker pod pod-configmaps-31accc39-f7f8-4045-8c87-cce56b174ac8 container configmap-volume-test: 
STEP: delete the pod
Aug 28 14:51:05.492: INFO: Waiting for pod pod-configmaps-31accc39-f7f8-4045-8c87-cce56b174ac8 to disappear
Aug 28 14:51:05.520: INFO: Pod pod-configmaps-31accc39-f7f8-4045-8c87-cce56b174ac8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:51:05.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-448" for this suite.

• [SLOW TEST:6.937 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4305,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:51:05.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-afe19e79-a246-4ad6-9adf-cc571f184795
STEP: Creating a pod to test consume secrets
Aug 28 14:51:05.649: INFO: Waiting up to 5m0s for pod "pod-secrets-4d8d5d53-ccbb-4c88-8b68-db75501e27af" in namespace "secrets-4357" to be "Succeeded or Failed"
Aug 28 14:51:05.653: INFO: Pod "pod-secrets-4d8d5d53-ccbb-4c88-8b68-db75501e27af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.406259ms
Aug 28 14:51:07.658: INFO: Pod "pod-secrets-4d8d5d53-ccbb-4c88-8b68-db75501e27af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009117391s
Aug 28 14:51:09.664: INFO: Pod "pod-secrets-4d8d5d53-ccbb-4c88-8b68-db75501e27af": Phase="Running", Reason="", readiness=true. Elapsed: 4.0145733s
Aug 28 14:51:11.774: INFO: Pod "pod-secrets-4d8d5d53-ccbb-4c88-8b68-db75501e27af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125074915s
STEP: Saw pod success
Aug 28 14:51:11.775: INFO: Pod "pod-secrets-4d8d5d53-ccbb-4c88-8b68-db75501e27af" satisfied condition "Succeeded or Failed"
Aug 28 14:51:12.032: INFO: Trying to get logs from node kali-worker pod pod-secrets-4d8d5d53-ccbb-4c88-8b68-db75501e27af container secret-volume-test: 
STEP: delete the pod
Aug 28 14:51:12.270: INFO: Waiting for pod pod-secrets-4d8d5d53-ccbb-4c88-8b68-db75501e27af to disappear
Aug 28 14:51:12.275: INFO: Pod pod-secrets-4d8d5d53-ccbb-4c88-8b68-db75501e27af no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:51:12.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4357" for this suite.

• [SLOW TEST:6.749 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4328,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:51:12.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:51:12.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-915541e2-c3e9-4469-b6bf-4b0a3e129e76" in namespace "downward-api-2087" to be "Succeeded or Failed"
Aug 28 14:51:12.505: INFO: Pod "downwardapi-volume-915541e2-c3e9-4469-b6bf-4b0a3e129e76": Phase="Pending", Reason="", readiness=false. Elapsed: 128.65313ms
Aug 28 14:51:14.510: INFO: Pod "downwardapi-volume-915541e2-c3e9-4469-b6bf-4b0a3e129e76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133942852s
Aug 28 14:51:16.910: INFO: Pod "downwardapi-volume-915541e2-c3e9-4469-b6bf-4b0a3e129e76": Phase="Running", Reason="", readiness=true. Elapsed: 4.533874397s
Aug 28 14:51:18.916: INFO: Pod "downwardapi-volume-915541e2-c3e9-4469-b6bf-4b0a3e129e76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.539569463s
STEP: Saw pod success
Aug 28 14:51:18.916: INFO: Pod "downwardapi-volume-915541e2-c3e9-4469-b6bf-4b0a3e129e76" satisfied condition "Succeeded or Failed"
Aug 28 14:51:18.927: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-915541e2-c3e9-4469-b6bf-4b0a3e129e76 container client-container: 
STEP: delete the pod
Aug 28 14:51:19.020: INFO: Waiting for pod downwardapi-volume-915541e2-c3e9-4469-b6bf-4b0a3e129e76 to disappear
Aug 28 14:51:19.027: INFO: Pod downwardapi-volume-915541e2-c3e9-4469-b6bf-4b0a3e129e76 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:51:19.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2087" for this suite.

• [SLOW TEST:6.751 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4333,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:51:19.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-d22b1ccb-365c-4bad-8025-2656e085d2e4
STEP: Creating a pod to test consume configMaps
Aug 28 14:51:19.156: INFO: Waiting up to 5m0s for pod "pod-configmaps-13670abe-79c9-439c-be85-cdffc861531a" in namespace "configmap-3490" to be "Succeeded or Failed"
Aug 28 14:51:19.187: INFO: Pod "pod-configmaps-13670abe-79c9-439c-be85-cdffc861531a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.817391ms
Aug 28 14:51:21.223: INFO: Pod "pod-configmaps-13670abe-79c9-439c-be85-cdffc861531a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067252921s
Aug 28 14:51:23.458: INFO: Pod "pod-configmaps-13670abe-79c9-439c-be85-cdffc861531a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301866379s
Aug 28 14:51:25.505: INFO: Pod "pod-configmaps-13670abe-79c9-439c-be85-cdffc861531a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349188289s
Aug 28 14:51:27.523: INFO: Pod "pod-configmaps-13670abe-79c9-439c-be85-cdffc861531a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.367120651s
STEP: Saw pod success
Aug 28 14:51:27.523: INFO: Pod "pod-configmaps-13670abe-79c9-439c-be85-cdffc861531a" satisfied condition "Succeeded or Failed"
Aug 28 14:51:27.527: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-13670abe-79c9-439c-be85-cdffc861531a container configmap-volume-test: 
STEP: delete the pod
Aug 28 14:51:27.621: INFO: Waiting for pod pod-configmaps-13670abe-79c9-439c-be85-cdffc861531a to disappear
Aug 28 14:51:27.629: INFO: Pod pod-configmaps-13670abe-79c9-439c-be85-cdffc861531a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:51:27.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3490" for this suite.

• [SLOW TEST:8.604 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4333,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:51:27.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 28 14:51:27.803: INFO: Waiting up to 5m0s for pod "pod-334ea36e-9d7c-408c-99db-a7d792595a8f" in namespace "emptydir-1226" to be "Succeeded or Failed"
Aug 28 14:51:27.814: INFO: Pod "pod-334ea36e-9d7c-408c-99db-a7d792595a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.277909ms
Aug 28 14:51:29.962: INFO: Pod "pod-334ea36e-9d7c-408c-99db-a7d792595a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159013573s
Aug 28 14:51:31.992: INFO: Pod "pod-334ea36e-9d7c-408c-99db-a7d792595a8f": Phase="Running", Reason="", readiness=true. Elapsed: 4.188713852s
Aug 28 14:51:33.996: INFO: Pod "pod-334ea36e-9d7c-408c-99db-a7d792595a8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.193370167s
STEP: Saw pod success
Aug 28 14:51:33.997: INFO: Pod "pod-334ea36e-9d7c-408c-99db-a7d792595a8f" satisfied condition "Succeeded or Failed"
Aug 28 14:51:34.000: INFO: Trying to get logs from node kali-worker pod pod-334ea36e-9d7c-408c-99db-a7d792595a8f container test-container: 
STEP: delete the pod
Aug 28 14:51:34.064: INFO: Waiting for pod pod-334ea36e-9d7c-408c-99db-a7d792595a8f to disappear
Aug 28 14:51:34.130: INFO: Pod pod-334ea36e-9d7c-408c-99db-a7d792595a8f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:51:34.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1226" for this suite.

• [SLOW TEST:6.500 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4336,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:51:34.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:51:45.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8646" for this suite.

• [SLOW TEST:11.915 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":252,"skipped":4341,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:51:46.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 28 14:51:46.267: INFO: Waiting up to 5m0s for pod "pod-efeba4dc-6cfb-4988-80bc-076f8e4bd05f" in namespace "emptydir-6490" to be "Succeeded or Failed"
Aug 28 14:51:46.373: INFO: Pod "pod-efeba4dc-6cfb-4988-80bc-076f8e4bd05f": Phase="Pending", Reason="", readiness=false. Elapsed: 106.096597ms
Aug 28 14:51:48.456: INFO: Pod "pod-efeba4dc-6cfb-4988-80bc-076f8e4bd05f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188995439s
Aug 28 14:51:50.460: INFO: Pod "pod-efeba4dc-6cfb-4988-80bc-076f8e4bd05f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.19295901s
STEP: Saw pod success
Aug 28 14:51:50.460: INFO: Pod "pod-efeba4dc-6cfb-4988-80bc-076f8e4bd05f" satisfied condition "Succeeded or Failed"
Aug 28 14:51:50.462: INFO: Trying to get logs from node kali-worker pod pod-efeba4dc-6cfb-4988-80bc-076f8e4bd05f container test-container: 
STEP: delete the pod
Aug 28 14:51:50.831: INFO: Waiting for pod pod-efeba4dc-6cfb-4988-80bc-076f8e4bd05f to disappear
Aug 28 14:51:50.954: INFO: Pod pod-efeba4dc-6cfb-4988-80bc-076f8e4bd05f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:51:50.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6490" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4351,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:51:50.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug 28 14:51:51.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:53:46.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7649" for this suite.

• [SLOW TEST:115.718 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":254,"skipped":4386,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:53:46.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 28 14:53:53.232: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 28 14:53:55.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734223233, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734223233, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734223233, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734223232, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 28 14:53:57.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734223233, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734223233, loc:(*time.Location)(0x74b2e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734223233, loc:(*time.Location)(0x74b2e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734223232, loc:(*time.Location)(0x74b2e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 28 14:54:00.765: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:54:02.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4208" for this suite.
STEP: Destroying namespace "webhook-4208-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.531 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":255,"skipped":4388,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:54:03.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:54:13.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2110" for this suite.

• [SLOW TEST:10.247 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4400,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:54:13.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:54:14.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7377" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":257,"skipped":4474,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:54:14.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:54:31.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5866" for this suite.

• [SLOW TEST:17.413 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":258,"skipped":4476,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:54:31.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 28 14:54:46.460: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 14:54:46.714: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 14:54:48.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 14:54:49.025: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 14:54:50.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 14:54:50.720: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 14:54:52.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 14:54:52.721: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 14:54:54.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 14:54:54.720: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 14:54:56.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 14:54:56.723: INFO: Pod pod-with-poststart-http-hook still exists
Aug 28 14:54:58.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 28 14:54:58.721: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:54:58.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4658" for this suite.

• [SLOW TEST:26.984 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
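The "Waiting for pod ... to disappear" lines above show the framework polling on a fixed 2-second interval until the pod is gone (seven checks in this case). A small sketch of that pattern — `pod_exists` and the injectable `sleep` are stand-ins for a real API lookup, not actual client calls:

```python
import itertools

def wait_until_gone(pod_exists, interval=2, timeout=60, sleep=None):
    """Poll pod_exists() every `interval` seconds until it returns False.

    Returns the number of polls taken; raises TimeoutError if the pod
    never disappears. `sleep` is injectable so the loop is testable.
    """
    sleep = sleep or (lambda s: None)
    for attempt in itertools.count(1):
        if not pod_exists():
            return attempt
        if attempt * interval >= timeout:
            raise TimeoutError("pod still exists after %ds" % timeout)
        sleep(interval)

# Simulate a pod that disappears on the 7th check, as in the log above.
states = iter([True] * 6 + [False])
print(wait_until_gone(lambda: next(states)))  # 7
```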
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4489,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:54:58.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Aug 28 14:54:58.893: INFO: Waiting up to 5m0s for pod "var-expansion-035074e4-eb9e-4eec-99de-1ef796a51607" in namespace "var-expansion-7615" to be "Succeeded or Failed"
Aug 28 14:54:58.902: INFO: Pod "var-expansion-035074e4-eb9e-4eec-99de-1ef796a51607": Phase="Pending", Reason="", readiness=false. Elapsed: 9.470724ms
Aug 28 14:55:01.286: INFO: Pod "var-expansion-035074e4-eb9e-4eec-99de-1ef796a51607": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392663371s
Aug 28 14:55:03.293: INFO: Pod "var-expansion-035074e4-eb9e-4eec-99de-1ef796a51607": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400481954s
Aug 28 14:55:05.544: INFO: Pod "var-expansion-035074e4-eb9e-4eec-99de-1ef796a51607": Phase="Pending", Reason="", readiness=false. Elapsed: 6.651166474s
Aug 28 14:55:07.549: INFO: Pod "var-expansion-035074e4-eb9e-4eec-99de-1ef796a51607": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.656425364s
STEP: Saw pod success
Aug 28 14:55:07.549: INFO: Pod "var-expansion-035074e4-eb9e-4eec-99de-1ef796a51607" satisfied condition "Succeeded or Failed"
Aug 28 14:55:07.554: INFO: Trying to get logs from node kali-worker2 pod var-expansion-035074e4-eb9e-4eec-99de-1ef796a51607 container dapi-container: 
STEP: delete the pod
Aug 28 14:55:07.598: INFO: Waiting for pod var-expansion-035074e4-eb9e-4eec-99de-1ef796a51607 to disappear
Aug 28 14:55:07.626: INFO: Pod var-expansion-035074e4-eb9e-4eec-99de-1ef796a51607 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:55:07.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7615" for this suite.

• [SLOW TEST:8.895 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
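The pod-phase waits in this run ("Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'") repeatedly observe the phase until it reaches a terminal value or the deadline expires. A simplified sketch of that condition check, with the observed phases fed in directly (a real client would fetch them per poll):

```python
def wait_for_terminal_phase(phases, max_polls):
    """Consume observed pod phases until one is terminal.

    `phases` is an iterable of phase strings in observation order;
    `max_polls` bounds how many observations are inspected.
    Returns (phase, polls_used) or raises TimeoutError.
    """
    terminal = {"Succeeded", "Failed"}
    for i, phase in enumerate(phases, 1):
        if phase in terminal:
            return phase, i
        if i >= max_polls:
            raise TimeoutError("pod never reached a terminal phase")
    raise TimeoutError("observations exhausted")

# The var-expansion pod above: four Pending observations, then Succeeded.
print(wait_for_terminal_phase(
    ["Pending", "Pending", "Pending", "Pending", "Succeeded"], 150))
# ('Succeeded', 5)
```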
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4520,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:55:07.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:55:11.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1226" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4535,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:55:11.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-6a67ff72-75ad-4321-a51e-4b6f63a39901
STEP: Creating a pod to test consume configMaps
Aug 28 14:55:11.951: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ac3a945-33fc-4ee2-a217-fb26aefd70f1" in namespace "configmap-6726" to be "Succeeded or Failed"
Aug 28 14:55:12.026: INFO: Pod "pod-configmaps-7ac3a945-33fc-4ee2-a217-fb26aefd70f1": Phase="Pending", Reason="", readiness=false. Elapsed: 74.742607ms
Aug 28 14:55:14.124: INFO: Pod "pod-configmaps-7ac3a945-33fc-4ee2-a217-fb26aefd70f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173115877s
Aug 28 14:55:16.280: INFO: Pod "pod-configmaps-7ac3a945-33fc-4ee2-a217-fb26aefd70f1": Phase="Running", Reason="", readiness=true. Elapsed: 4.329030919s
Aug 28 14:55:18.286: INFO: Pod "pod-configmaps-7ac3a945-33fc-4ee2-a217-fb26aefd70f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.335194552s
STEP: Saw pod success
Aug 28 14:55:18.286: INFO: Pod "pod-configmaps-7ac3a945-33fc-4ee2-a217-fb26aefd70f1" satisfied condition "Succeeded or Failed"
Aug 28 14:55:18.291: INFO: Trying to get logs from node kali-worker pod pod-configmaps-7ac3a945-33fc-4ee2-a217-fb26aefd70f1 container configmap-volume-test: 
STEP: delete the pod
Aug 28 14:55:18.410: INFO: Waiting for pod pod-configmaps-7ac3a945-33fc-4ee2-a217-fb26aefd70f1 to disappear
Aug 28 14:55:18.429: INFO: Pod pod-configmaps-7ac3a945-33fc-4ee2-a217-fb26aefd70f1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:55:18.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6726" for this suite.

• [SLOW TEST:6.610 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4550,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:55:18.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0828 14:55:28.723886      11 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 28 14:55:28.724: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:55:28.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1026" for this suite.

• [SLOW TEST:10.295 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
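In the non-orphaning case above, deleting the ReplicationController cascades: the garbage collector deletes every pod whose ownerReferences point at the deleted owner. A hypothetical sketch of that selection logic (names and shapes are illustrative, not the real controller API):

```python
def cascade_targets(deleted_uid, pods, orphan=False):
    """Return names of pods the GC would delete when their owner goes away.

    Each pod is {"name": ..., "owner_uids": [...]}. With orphan=True the
    owner reference would instead be dropped and nothing is deleted.
    """
    if orphan:
        return []
    return [p["name"] for p in pods if deleted_uid in p["owner_uids"]]

pods = [
    {"name": "rc-pod-1", "owner_uids": ["rc-123"]},
    {"name": "rc-pod-2", "owner_uids": ["rc-123"]},
    {"name": "standalone", "owner_uids": []},
]
print(cascade_targets("rc-123", pods))  # ['rc-pod-1', 'rc-pod-2']
```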
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":263,"skipped":4554,"failed":0}
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:55:28.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:55:28.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-baa7e6a3-3cc6-496f-91ad-d5c793bc02c2" in namespace "projected-1903" to be "Succeeded or Failed"
Aug 28 14:55:28.866: INFO: Pod "downwardapi-volume-baa7e6a3-3cc6-496f-91ad-d5c793bc02c2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.24855ms
Aug 28 14:55:30.891: INFO: Pod "downwardapi-volume-baa7e6a3-3cc6-496f-91ad-d5c793bc02c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04778746s
Aug 28 14:55:32.898: INFO: Pod "downwardapi-volume-baa7e6a3-3cc6-496f-91ad-d5c793bc02c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054498356s
Aug 28 14:55:35.283: INFO: Pod "downwardapi-volume-baa7e6a3-3cc6-496f-91ad-d5c793bc02c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44023392s
Aug 28 14:55:37.290: INFO: Pod "downwardapi-volume-baa7e6a3-3cc6-496f-91ad-d5c793bc02c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.446562583s
STEP: Saw pod success
Aug 28 14:55:37.290: INFO: Pod "downwardapi-volume-baa7e6a3-3cc6-496f-91ad-d5c793bc02c2" satisfied condition "Succeeded or Failed"
Aug 28 14:55:37.294: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-baa7e6a3-3cc6-496f-91ad-d5c793bc02c2 container client-container: 
STEP: delete the pod
Aug 28 14:55:37.361: INFO: Waiting for pod downwardapi-volume-baa7e6a3-3cc6-496f-91ad-d5c793bc02c2 to disappear
Aug 28 14:55:37.369: INFO: Pod downwardapi-volume-baa7e6a3-3cc6-496f-91ad-d5c793bc02c2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:55:37.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1903" for this suite.

• [SLOW TEST:8.645 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4554,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:55:37.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 28 14:55:37.532: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:55:53.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4831" for this suite.

• [SLOW TEST:15.642 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":265,"skipped":4590,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:55:53.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 28 14:56:00.935: INFO: 10 pods remaining
Aug 28 14:56:00.935: INFO: 10 pods has nil DeletionTimestamp
Aug 28 14:56:00.935: INFO: 
Aug 28 14:56:03.160: INFO: 8 pods remaining
Aug 28 14:56:03.160: INFO: 0 pods has nil DeletionTimestamp
Aug 28 14:56:03.160: INFO: 
Aug 28 14:56:04.551: INFO: 0 pods remaining
Aug 28 14:56:04.552: INFO: 0 pods has nil DeletionTimestamp
Aug 28 14:56:04.552: INFO: 
Aug 28 14:56:05.904: INFO: 0 pods remaining
Aug 28 14:56:05.904: INFO: 0 pods has nil DeletionTimestamp
Aug 28 14:56:05.904: INFO: 
STEP: Gathering metrics
W0828 14:56:08.012423      11 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 28 14:56:08.012: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:56:08.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2268" for this suite.

• [SLOW TEST:16.445 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
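The "keep the rc around until all its pods are deleted" behavior above corresponds to foreground cascading deletion: the owner receives a deletionTimestamp plus a foregroundDeletion finalizer and is only removed once its dependents are gone (hence the "N pods remaining" countdown). A simplified sketch of that rule — field names mirror the API objects, but the helper itself is illustrative:

```python
def owner_removable(owner, dependents_remaining):
    """Foreground deletion: owner is removed only after dependents are gone."""
    if owner["deletionTimestamp"] is None:
        return False  # not being deleted at all
    if "foregroundDeletion" in owner["finalizers"]:
        return dependents_remaining == 0
    return True  # background deletion removes the owner immediately

rc = {"deletionTimestamp": "2020-08-28T14:55:55Z",
      "finalizers": ["foregroundDeletion"]}
print(owner_removable(rc, 10), owner_removable(rc, 0))  # False True
```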
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":266,"skipped":4593,"failed":0}
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:56:09.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 28 14:56:10.196: INFO: Created pod &Pod{ObjectMeta:{dns-4946  dns-4946 /api/v1/namespaces/dns-4946/pods/dns-4946 b05e42b5-6ba5-430f-8875-53acbc448f1c 1782675 0 2020-08-28 14:56:10 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-08-28 14:56:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nb9tp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nb9tp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nb9tp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 28 14:56:10.332: INFO: The status of Pod dns-4946 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:56:12.464: INFO: The status of Pod dns-4946 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:56:14.368: INFO: The status of Pod dns-4946 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:56:16.382: INFO: The status of Pod dns-4946 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:56:18.806: INFO: The status of Pod dns-4946 is Pending, waiting for it to be Running (with Ready = true)
Aug 28 14:56:20.348: INFO: The status of Pod dns-4946 is Running (Ready = true)
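With dnsPolicy:None, the kubelet composes the pod's resolver configuration purely from the spec's dnsConfig (Nameservers:[1.1.1.1], Searches:[resolv.conf.local] in the pod dump above), which is what the test verifies next. A rough sketch of that rendering, assuming a simple one-directive-per-line resolv.conf layout:

```python
def render_resolv_conf(dns_config):
    """Render resolv.conf-style text from a pod dnsConfig mapping."""
    lines = ["nameserver %s" % ns for ns in dns_config.get("nameservers", [])]
    if dns_config.get("searches"):
        lines.append("search " + " ".join(dns_config["searches"]))
    for opt in dns_config.get("options", []):
        val = opt.get("value")
        lines.append("options %s" % (opt["name"] if val is None
                                     else "%s:%s" % (opt["name"], val)))
    return "\n".join(lines) + "\n"

cfg = {"nameservers": ["1.1.1.1"], "searches": ["resolv.conf.local"]}
print(render_resolv_conf(cfg))
```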
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 28 14:56:20.348: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4946 PodName:dns-4946 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:56:20.348: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:56:20.473456      11 log.go:172] (0x400132c4d0) (0x40020e9540) Create stream
I0828 14:56:20.473683      11 log.go:172] (0x400132c4d0) (0x40020e9540) Stream added, broadcasting: 1
I0828 14:56:20.476605      11 log.go:172] (0x400132c4d0) Reply frame received for 1
I0828 14:56:20.476839      11 log.go:172] (0x400132c4d0) (0x40026b50e0) Create stream
I0828 14:56:20.476930      11 log.go:172] (0x400132c4d0) (0x40026b50e0) Stream added, broadcasting: 3
I0828 14:56:20.478227      11 log.go:172] (0x400132c4d0) Reply frame received for 3
I0828 14:56:20.478360      11 log.go:172] (0x400132c4d0) (0x4000996320) Create stream
I0828 14:56:20.478423      11 log.go:172] (0x400132c4d0) (0x4000996320) Stream added, broadcasting: 5
I0828 14:56:20.479478      11 log.go:172] (0x400132c4d0) Reply frame received for 5
I0828 14:56:20.552176      11 log.go:172] (0x400132c4d0) Data frame received for 5
I0828 14:56:20.552372      11 log.go:172] (0x4000996320) (5) Data frame handling
I0828 14:56:20.552553      11 log.go:172] (0x400132c4d0) Data frame received for 3
I0828 14:56:20.552636      11 log.go:172] (0x40026b50e0) (3) Data frame handling
I0828 14:56:20.552823      11 log.go:172] (0x40026b50e0) (3) Data frame sent
I0828 14:56:20.552925      11 log.go:172] (0x400132c4d0) Data frame received for 3
I0828 14:56:20.552998      11 log.go:172] (0x40026b50e0) (3) Data frame handling
I0828 14:56:20.558243      11 log.go:172] (0x400132c4d0) Data frame received for 1
I0828 14:56:20.558349      11 log.go:172] (0x40020e9540) (1) Data frame handling
I0828 14:56:20.558435      11 log.go:172] (0x40020e9540) (1) Data frame sent
I0828 14:56:20.558526      11 log.go:172] (0x400132c4d0) (0x40020e9540) Stream removed, broadcasting: 1
I0828 14:56:20.558668      11 log.go:172] (0x400132c4d0) Go away received
I0828 14:56:20.559045      11 log.go:172] (0x400132c4d0) (0x40020e9540) Stream removed, broadcasting: 1
I0828 14:56:20.559195      11 log.go:172] (0x400132c4d0) (0x40026b50e0) Stream removed, broadcasting: 3
I0828 14:56:20.559316      11 log.go:172] (0x400132c4d0) (0x4000996320) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 28 14:56:20.559: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4946 PodName:dns-4946 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 28 14:56:20.560: INFO: >>> kubeConfig: /root/.kube/config
I0828 14:56:20.644656      11 log.go:172] (0x40031d4420) (0x40017a1360) Create stream
I0828 14:56:20.644974      11 log.go:172] (0x40031d4420) (0x40017a1360) Stream added, broadcasting: 1
I0828 14:56:20.648799      11 log.go:172] (0x40031d4420) Reply frame received for 1
I0828 14:56:20.649045      11 log.go:172] (0x40031d4420) (0x40026b5180) Create stream
I0828 14:56:20.649160      11 log.go:172] (0x40031d4420) (0x40026b5180) Stream added, broadcasting: 3
I0828 14:56:20.650932      11 log.go:172] (0x40031d4420) Reply frame received for 3
I0828 14:56:20.651129      11 log.go:172] (0x40031d4420) (0x40017a14a0) Create stream
I0828 14:56:20.651198      11 log.go:172] (0x40031d4420) (0x40017a14a0) Stream added, broadcasting: 5
I0828 14:56:20.652581      11 log.go:172] (0x40031d4420) Reply frame received for 5
I0828 14:56:20.730275      11 log.go:172] (0x40031d4420) Data frame received for 3
I0828 14:56:20.730372      11 log.go:172] (0x40026b5180) (3) Data frame handling
I0828 14:56:20.730454      11 log.go:172] (0x40026b5180) (3) Data frame sent
I0828 14:56:20.733632      11 log.go:172] (0x40031d4420) Data frame received for 3
I0828 14:56:20.733752      11 log.go:172] (0x40026b5180) (3) Data frame handling
I0828 14:56:20.733891      11 log.go:172] (0x40031d4420) Data frame received for 5
I0828 14:56:20.734067      11 log.go:172] (0x40017a14a0) (5) Data frame handling
I0828 14:56:20.734776      11 log.go:172] (0x40031d4420) Data frame received for 1
I0828 14:56:20.734861      11 log.go:172] (0x40017a1360) (1) Data frame handling
I0828 14:56:20.734948      11 log.go:172] (0x40017a1360) (1) Data frame sent
I0828 14:56:20.735026      11 log.go:172] (0x40031d4420) (0x40017a1360) Stream removed, broadcasting: 1
I0828 14:56:20.735137      11 log.go:172] (0x40031d4420) Go away received
I0828 14:56:20.735547      11 log.go:172] (0x40031d4420) (0x40017a1360) Stream removed, broadcasting: 1
I0828 14:56:20.735631      11 log.go:172] (0x40031d4420) (0x40026b5180) Stream removed, broadcasting: 3
I0828 14:56:20.735704      11 log.go:172] (0x40031d4420) (0x40017a14a0) Stream removed, broadcasting: 5
Aug 28 14:56:20.735: INFO: Deleting pod dns-4946...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:56:22.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4946" for this suite.

• [SLOW TEST:12.903 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":267,"skipped":4593,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:56:22.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 28 14:56:23.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3cd2edc-645e-48e0-b0e0-92ee0f4c1e81" in namespace "downward-api-276" to be "Succeeded or Failed"
Aug 28 14:56:23.413: INFO: Pod "downwardapi-volume-d3cd2edc-645e-48e0-b0e0-92ee0f4c1e81": Phase="Pending", Reason="", readiness=false. Elapsed: 15.984362ms
Aug 28 14:56:25.631: INFO: Pod "downwardapi-volume-d3cd2edc-645e-48e0-b0e0-92ee0f4c1e81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233625195s
Aug 28 14:56:27.672: INFO: Pod "downwardapi-volume-d3cd2edc-645e-48e0-b0e0-92ee0f4c1e81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274752382s
Aug 28 14:56:29.984: INFO: Pod "downwardapi-volume-d3cd2edc-645e-48e0-b0e0-92ee0f4c1e81": Phase="Running", Reason="", readiness=true. Elapsed: 6.587278871s
Aug 28 14:56:31.992: INFO: Pod "downwardapi-volume-d3cd2edc-645e-48e0-b0e0-92ee0f4c1e81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.594529246s
STEP: Saw pod success
Aug 28 14:56:31.992: INFO: Pod "downwardapi-volume-d3cd2edc-645e-48e0-b0e0-92ee0f4c1e81" satisfied condition "Succeeded or Failed"
Aug 28 14:56:32.013: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d3cd2edc-645e-48e0-b0e0-92ee0f4c1e81 container client-container: 
STEP: delete the pod
Aug 28 14:56:32.050: INFO: Waiting for pod downwardapi-volume-d3cd2edc-645e-48e0-b0e0-92ee0f4c1e81 to disappear
Aug 28 14:56:32.061: INFO: Pod downwardapi-volume-d3cd2edc-645e-48e0-b0e0-92ee0f4c1e81 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:56:32.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-276" for this suite.

• [SLOW TEST:9.713 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4618,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:56:32.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 28 14:56:32.159: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:56:46.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3034" for this suite.

• [SLOW TEST:14.460 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":269,"skipped":4630,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:56:46.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-b946267d-57f6-478e-bcaa-02ad47ef2743
STEP: Creating secret with name s-test-opt-upd-41fd25e7-d8ea-4f67-ade3-716f2ca84f24
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b946267d-57f6-478e-bcaa-02ad47ef2743
STEP: Updating secret s-test-opt-upd-41fd25e7-d8ea-4f67-ade3-716f2ca84f24
STEP: Creating secret with name s-test-opt-create-e893d45d-e78f-439e-97c7-c2fa85aa725d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:58:21.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-105" for this suite.

• [SLOW TEST:94.938 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4647,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:58:21.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:58:27.957: INFO: Waiting up to 5m0s for pod "client-envvars-d5ef3872-5019-4395-834c-49dc67d2c7c4" in namespace "pods-7852" to be "Succeeded or Failed"
Aug 28 14:58:28.392: INFO: Pod "client-envvars-d5ef3872-5019-4395-834c-49dc67d2c7c4": Phase="Pending", Reason="", readiness=false. Elapsed: 434.745142ms
Aug 28 14:58:30.431: INFO: Pod "client-envvars-d5ef3872-5019-4395-834c-49dc67d2c7c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47361688s
Aug 28 14:58:32.656: INFO: Pod "client-envvars-d5ef3872-5019-4395-834c-49dc67d2c7c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.699159425s
Aug 28 14:58:34.680: INFO: Pod "client-envvars-d5ef3872-5019-4395-834c-49dc67d2c7c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.723378027s
STEP: Saw pod success
Aug 28 14:58:34.681: INFO: Pod "client-envvars-d5ef3872-5019-4395-834c-49dc67d2c7c4" satisfied condition "Succeeded or Failed"
Aug 28 14:58:34.686: INFO: Trying to get logs from node kali-worker pod client-envvars-d5ef3872-5019-4395-834c-49dc67d2c7c4 container env3cont: 
STEP: delete the pod
Aug 28 14:58:34.762: INFO: Waiting for pod client-envvars-d5ef3872-5019-4395-834c-49dc67d2c7c4 to disappear
Aug 28 14:58:35.158: INFO: Pod client-envvars-d5ef3872-5019-4395-834c-49dc67d2c7c4 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:58:35.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7852" for this suite.

• [SLOW TEST:13.745 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4666,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:58:35.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6918.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6918.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6918.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6918.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6918.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6918.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6918.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6918.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6918.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6918.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 145.124.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.124.145_udp@PTR;check="$$(dig +tcp +noall +answer +search 145.124.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.124.145_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6918.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6918.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6918.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6918.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6918.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6918.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6918.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6918.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6918.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6918.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6918.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 145.124.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.124.145_udp@PTR;check="$$(dig +tcp +noall +answer +search 145.124.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.124.145_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 28 14:58:49.570: INFO: Unable to read wheezy_udp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:49.575: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:49.580: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:49.584: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:49.756: INFO: Unable to read jessie_udp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:49.759: INFO: Unable to read jessie_tcp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:49.763: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:49.766: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:49.788: INFO: Lookups using dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2 failed for: [wheezy_udp@dns-test-service.dns-6918.svc.cluster.local wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local jessie_udp@dns-test-service.dns-6918.svc.cluster.local jessie_tcp@dns-test-service.dns-6918.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local]

Aug 28 14:58:54.807: INFO: Unable to read wheezy_udp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:54.824: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:54.836: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:54.938: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:54.982: INFO: Unable to read jessie_udp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:54.987: INFO: Unable to read jessie_tcp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:55.143: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:55.148: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:55.184: INFO: Lookups using dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2 failed for: [wheezy_udp@dns-test-service.dns-6918.svc.cluster.local wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local jessie_udp@dns-test-service.dns-6918.svc.cluster.local jessie_tcp@dns-test-service.dns-6918.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local]

Aug 28 14:58:59.796: INFO: Unable to read wheezy_udp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:59.801: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:59.805: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:59.809: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:59.839: INFO: Unable to read jessie_udp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:59.842: INFO: Unable to read jessie_tcp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:59.846: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:59.850: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:58:59.880: INFO: Lookups using dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2 failed for: [wheezy_udp@dns-test-service.dns-6918.svc.cluster.local wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local jessie_udp@dns-test-service.dns-6918.svc.cluster.local jessie_tcp@dns-test-service.dns-6918.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local]

Aug 28 14:59:04.796: INFO: Unable to read wheezy_udp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:04.802: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:04.806: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:04.810: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:04.862: INFO: Unable to read jessie_udp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:04.866: INFO: Unable to read jessie_tcp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:04.870: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:04.873: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:04.897: INFO: Lookups using dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2 failed for: [wheezy_udp@dns-test-service.dns-6918.svc.cluster.local wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local jessie_udp@dns-test-service.dns-6918.svc.cluster.local jessie_tcp@dns-test-service.dns-6918.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local]

Aug 28 14:59:09.795: INFO: Unable to read wheezy_udp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:09.800: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:09.805: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:09.810: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:09.881: INFO: Unable to read jessie_udp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:09.885: INFO: Unable to read jessie_tcp@dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:09.889: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:09.893: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:09.917: INFO: Lookups using dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2 failed for: [wheezy_udp@dns-test-service.dns-6918.svc.cluster.local wheezy_tcp@dns-test-service.dns-6918.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local jessie_udp@dns-test-service.dns-6918.svc.cluster.local jessie_tcp@dns-test-service.dns-6918.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local]

Aug 28 14:59:15.398: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local from pod dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2: the server could not find the requested resource (get pods dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2)
Aug 28 14:59:15.628: INFO: Lookups using dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2 failed for: [wheezy_tcp@_http._tcp.dns-test-service.dns-6918.svc.cluster.local]

Aug 28 14:59:20.187: INFO: DNS probes using dns-6918/dns-test-dd4c7950-bfb2-4744-bf3b-3b207b8074e2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:59:21.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6918" for this suite.

• [SLOW TEST:45.783 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":272,"skipped":4668,"failed":0}
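The DNS test above polls the same eight records every five seconds until all lookups succeed: each of the two client images (wheezy and jessie) resolves the service's cluster-DNS name and its SRV-style `_http._tcp` name over both UDP and TCP. As a minimal sketch (not the e2e framework's actual code; `probe_names` is a hypothetical helper), the probe list seen in the log can be reconstructed like this:

```python
def probe_names(service, namespace):
    """Build the lookup names the DNS conformance test polls for one service."""
    fqdn = f"{service}.{namespace}.svc.cluster.local"
    # The test checks the plain service record and the SRV-style port record.
    targets = [fqdn, f"_http._tcp.{fqdn}"]
    return [
        f"{image}_{proto}@{target}"
        for image in ("wheezy", "jessie")  # the two test client images
        for target in targets
        for proto in ("udp", "tcp")
    ]
```

Running `probe_names("dns-test-service", "dns-6918")` reproduces the eight names listed in each "Lookups ... failed for" line; the test only reports success once every one of them resolves from inside the probe pod.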
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:59:21.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 28 14:59:21.161: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 28 14:59:21.233: INFO: Waiting for terminating namespaces to be deleted...
Aug 28 14:59:21.238: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 28 14:59:21.252: INFO: kindnet-f7bnz from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 28 14:59:21.252: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 14:59:21.252: INFO: kube-proxy-hhbw6 from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 28 14:59:21.252: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 14:59:21.252: INFO: daemon-set-rsfwc from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 28 14:59:21.252: INFO: 	Container app ready: true, restart count 0
Aug 28 14:59:21.252: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 28 14:59:21.265: INFO: kindnet-4v6sn from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 28 14:59:21.265: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 28 14:59:21.265: INFO: kube-proxy-m77qg from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 28 14:59:21.265: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 28 14:59:21.265: INFO: daemon-set-69cql from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 28 14:59:21.265: INFO: 	Container app ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8ed8f8a3-35d4-47bd-9e75-ad2c31642d86 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-8ed8f8a3-35d4-47bd-9e75-ad2c31642d86 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8ed8f8a3-35d4-47bd-9e75-ad2c31642d86
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 14:59:42.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-683" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:21.047 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":273,"skipped":4669,"failed":0}
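The scheduler test above verifies that a host port only conflicts when the port, protocol, and host IP all collide: pod2 reuses port 54321 with a different hostIP, and pod3 reuses the same port and hostIP but switches to UDP, and all three still schedule onto the same node. A simplified sketch of that predicate (an illustration, not the actual kube-scheduler code; the triple layout is assumed) looks like:

```python
def host_ports_conflict(a, b):
    """a, b: (hostIP, protocol, hostPort) triples for two requested host ports.

    Two requests conflict only if port AND protocol match and the host IPs
    overlap; 0.0.0.0 acts as a wildcard that overlaps every address.
    """
    ip_a, proto_a, port_a = a
    ip_b, proto_b, port_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)
```

With the values from the log, `("127.0.0.1", "TCP", 54321)` vs `("127.0.0.2", "TCP", 54321)` and `("127.0.0.2", "TCP", 54321)` vs `("127.0.0.2", "UDP", 54321)` are both conflict-free, which is why all three pods land on kali-worker2.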
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 14:59:42.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 28 14:59:42.337: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 28 14:59:42.390: INFO: Number of nodes with available pods: 0
Aug 28 14:59:42.390: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 28 14:59:42.546: INFO: Number of nodes with available pods: 0
Aug 28 14:59:42.546: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:43.552: INFO: Number of nodes with available pods: 0
Aug 28 14:59:43.552: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:44.554: INFO: Number of nodes with available pods: 0
Aug 28 14:59:44.554: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:45.728: INFO: Number of nodes with available pods: 0
Aug 28 14:59:45.728: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:46.679: INFO: Number of nodes with available pods: 0
Aug 28 14:59:46.679: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:47.693: INFO: Number of nodes with available pods: 1
Aug 28 14:59:47.693: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 28 14:59:48.031: INFO: Number of nodes with available pods: 1
Aug 28 14:59:48.031: INFO: Number of running nodes: 0, number of available pods: 1
Aug 28 14:59:49.131: INFO: Number of nodes with available pods: 0
Aug 28 14:59:49.132: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 28 14:59:49.703: INFO: Number of nodes with available pods: 0
Aug 28 14:59:49.703: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:50.777: INFO: Number of nodes with available pods: 0
Aug 28 14:59:50.777: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:51.711: INFO: Number of nodes with available pods: 0
Aug 28 14:59:51.711: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:52.752: INFO: Number of nodes with available pods: 0
Aug 28 14:59:52.752: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:53.817: INFO: Number of nodes with available pods: 0
Aug 28 14:59:53.818: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:54.735: INFO: Number of nodes with available pods: 0
Aug 28 14:59:54.735: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:55.709: INFO: Number of nodes with available pods: 0
Aug 28 14:59:55.709: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:57.004: INFO: Number of nodes with available pods: 0
Aug 28 14:59:57.004: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:57.708: INFO: Number of nodes with available pods: 0
Aug 28 14:59:57.708: INFO: Node kali-worker2 is running more than one daemon pod
Aug 28 14:59:58.724: INFO: Number of nodes with available pods: 1
Aug 28 14:59:58.724: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7581, will wait for the garbage collector to delete the pods
Aug 28 14:59:58.962: INFO: Deleting DaemonSet.extensions daemon-set took: 108.094839ms
Aug 28 14:59:59.362: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.461942ms
Aug 28 15:00:04.395: INFO: Number of nodes with available pods: 0
Aug 28 15:00:04.395: INFO: Number of running nodes: 0, number of available pods: 0
Aug 28 15:00:04.400: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7581/daemonsets","resourceVersion":"1783746"},"items":null}

Aug 28 15:00:04.403: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7581/pods","resourceVersion":"1783746"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 15:00:04.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7581" for this suite.

• [SLOW TEST:22.540 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":274,"skipped":4675,"failed":0}
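The DaemonSet test above drives scheduling purely through labels: the daemon pod appears on a node only after the node's labels satisfy the DaemonSet's node selector, and disappears when the label flips from blue to green. The matching rule itself is simple; a sketch of it (an illustration of selector semantics, not the controller's code, with the label key assumed) is:

```python
def should_run_daemon_pod(node_selector, node_labels):
    """A DaemonSet pod belongs on a node iff every selector entry matches
    the node's labels exactly (an empty selector matches every node)."""
    return all(node_labels.get(key) == value
               for key, value in node_selector.items())
```

This mirrors the log's sequence: with a selector like `{"color": "blue"}`, relabeling kali-worker2 blue launches the pod, and relabeling it green unschedules the pod until the DaemonSet's selector is updated to green as well.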
SSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 28 15:00:04.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 28 15:00:05.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5611" for this suite.
•
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":275,"skipped":4686,"failed":0}
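The Lease test above only exercises the coordination.k8s.io API's CRUD surface, but the object it creates is what leader election is built on: a holder keeps leadership as long as it keeps bumping `renewTime` within `leaseDurationSeconds`. A sketch of that expiry check (an assumption about how consumers interpret the fields, not code from the test) is:

```python
from datetime import datetime, timedelta

def lease_expired(acquire_time, renew_time, lease_duration_seconds, now):
    """A Lease is expired once leaseDurationSeconds have elapsed since the
    last renewal (falling back to acquireTime if it was never renewed)."""
    deadline = (renew_time or acquire_time) + timedelta(seconds=lease_duration_seconds)
    return now >= deadline
```

Challengers run exactly this check before attempting to take over the lease; a timely renewal pushes the deadline forward and keeps the current holder in place.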
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 28 15:00:05.096: INFO: Running AfterSuite actions on all nodes
Aug 28 15:00:05.097: INFO: Running AfterSuite actions on node 1
Aug 28 15:00:05.097: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 7406.809 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS